00:00:00.000 Started by upstream project "autotest-per-patch" build number 127147 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.117 The recommended git tool is: git 00:00:00.117 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.186 Fetching changes from the remote Git repository 00:00:00.188 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.233 Using shallow fetch with depth 1 00:00:00.233 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.233 > git --version # timeout=10 00:00:00.264 > git --version # 'git version 2.39.2' 00:00:00.264 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.366 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.377 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.388 Checking out Revision c396a3cd44e4090a57fb151c18fefbf4a9bd324b (FETCH_HEAD) 00:00:07.388 > git config core.sparsecheckout # timeout=10 00:00:07.399 > git read-tree -mu HEAD # timeout=10 00:00:07.414 > git checkout -f c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=5 00:00:07.434 Commit message: "jenkins/jjb-config: Use freebsd14 for the pkgdep-freebsd job" 00:00:07.434 > git rev-list --no-walk c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=10 00:00:07.513 [Pipeline] Start of Pipeline 00:00:07.523 [Pipeline] library 00:00:07.525 Loading library shm_lib@master 00:00:07.525 Library shm_lib@master is cached. Copying from home. 00:00:07.538 [Pipeline] node 00:00:07.545 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.547 [Pipeline] { 00:00:07.556 [Pipeline] catchError 00:00:07.558 [Pipeline] { 00:00:07.571 [Pipeline] wrap 00:00:07.581 [Pipeline] { 00:00:07.589 [Pipeline] stage 00:00:07.591 [Pipeline] { (Prologue) 00:00:07.768 [Pipeline] sh 00:00:08.054 + logger -p user.info -t JENKINS-CI 00:00:08.071 [Pipeline] echo 00:00:08.072 Node: WFP22 00:00:08.082 [Pipeline] sh 00:00:08.435 [Pipeline] setCustomBuildProperty 00:00:08.447 [Pipeline] echo 00:00:08.448 Cleanup processes 00:00:08.453 [Pipeline] sh 00:00:08.732 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.732 3590288 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.746 [Pipeline] sh 00:00:09.030 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.030 ++ grep -v 'sudo pgrep' 00:00:09.030 ++ awk '{print $1}' 00:00:09.030 + sudo kill -9 00:00:09.030 + true 00:00:09.045 [Pipeline] cleanWs 00:00:09.054 [WS-CLEANUP] Deleting project workspace... 00:00:09.054 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.060 [WS-CLEANUP] done 00:00:09.064 [Pipeline] setCustomBuildProperty 00:00:09.076 [Pipeline] sh 00:00:09.356 + sudo git config --global --replace-all safe.directory '*' 00:00:09.440 [Pipeline] httpRequest 00:00:10.494 [Pipeline] echo 00:00:10.496 Sorcerer 10.211.164.101 is alive 00:00:10.505 [Pipeline] httpRequest 00:00:10.510 HttpMethod: GET 00:00:10.511 URL: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:10.511 Sending request to url: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:10.533 Response Code: HTTP/1.1 200 OK 00:00:10.533 Success: Status code 200 is in the accepted range: 200,404 00:00:10.534 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:25.230 [Pipeline] sh 00:00:25.514 + tar --no-same-owner -xf jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:25.530 [Pipeline] httpRequest 00:00:25.559 [Pipeline] echo 00:00:25.561 Sorcerer 10.211.164.101 is alive 00:00:25.571 [Pipeline] httpRequest 00:00:25.576 HttpMethod: GET 00:00:25.577 URL: http://10.211.164.101/packages/spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz 00:00:25.577 Sending request to url: http://10.211.164.101/packages/spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz 00:00:25.600 Response Code: HTTP/1.1 200 OK 00:00:25.601 Success: Status code 200 is in the accepted range: 200,404 00:00:25.601 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz 00:01:35.012 [Pipeline] sh 00:01:35.296 + tar --no-same-owner -xf spdk_6f18624d4dad6e4ce0db8ef9c88f9af541785fdd.tar.gz 00:01:37.843 [Pipeline] sh 00:01:38.123 + git -C spdk log --oneline -n5 00:01:38.123 6f18624d4 python/rpc: Python rpc call generator. 
00:01:38.123 da8d49b2f python/rpc: Replace bdev.py with generated rpc's 00:01:38.123 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:01:38.123 50222f810 configure: don't exit on non Intel platforms 00:01:38.123 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:38.134 [Pipeline] } 00:01:38.151 [Pipeline] // stage 00:01:38.159 [Pipeline] stage 00:01:38.161 [Pipeline] { (Prepare) 00:01:38.178 [Pipeline] writeFile 00:01:38.195 [Pipeline] sh 00:01:38.533 + logger -p user.info -t JENKINS-CI 00:01:38.544 [Pipeline] sh 00:01:38.823 + logger -p user.info -t JENKINS-CI 00:01:38.835 [Pipeline] sh 00:01:39.117 + cat autorun-spdk.conf 00:01:39.117 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.117 SPDK_TEST_NVMF=1 00:01:39.117 SPDK_TEST_NVME_CLI=1 00:01:39.117 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.117 SPDK_TEST_NVMF_NICS=e810 00:01:39.117 SPDK_TEST_VFIOUSER=1 00:01:39.117 SPDK_RUN_UBSAN=1 00:01:39.117 NET_TYPE=phy 00:01:39.124 RUN_NIGHTLY=0 00:01:39.128 [Pipeline] readFile 00:01:39.154 [Pipeline] withEnv 00:01:39.156 [Pipeline] { 00:01:39.171 [Pipeline] sh 00:01:39.454 + set -ex 00:01:39.454 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:39.454 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:39.454 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.454 ++ SPDK_TEST_NVMF=1 00:01:39.454 ++ SPDK_TEST_NVME_CLI=1 00:01:39.454 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.454 ++ SPDK_TEST_NVMF_NICS=e810 00:01:39.454 ++ SPDK_TEST_VFIOUSER=1 00:01:39.454 ++ SPDK_RUN_UBSAN=1 00:01:39.454 ++ NET_TYPE=phy 00:01:39.454 ++ RUN_NIGHTLY=0 00:01:39.454 + case $SPDK_TEST_NVMF_NICS in 00:01:39.454 + DRIVERS=ice 00:01:39.454 + [[ tcp == \r\d\m\a ]] 00:01:39.454 + [[ -n ice ]] 00:01:39.454 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:39.454 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:39.454 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:39.454 rmmod: ERROR: Module irdma is not currently loaded 00:01:39.454 rmmod: ERROR: Module i40iw is not currently loaded 00:01:39.454 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:39.454 + true 00:01:39.454 + for D in $DRIVERS 00:01:39.454 + sudo modprobe ice 00:01:39.454 + exit 0 00:01:39.462 [Pipeline] } 00:01:39.477 [Pipeline] // withEnv 00:01:39.481 [Pipeline] } 00:01:39.496 [Pipeline] // stage 00:01:39.501 [Pipeline] catchError 00:01:39.503 [Pipeline] { 00:01:39.514 [Pipeline] timeout 00:01:39.514 Timeout set to expire in 50 min 00:01:39.515 [Pipeline] { 00:01:39.524 [Pipeline] stage 00:01:39.525 [Pipeline] { (Tests) 00:01:39.534 [Pipeline] sh 00:01:39.815 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.815 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.815 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.815 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:39.815 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.815 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:39.815 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:39.815 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:39.815 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:39.815 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:39.815 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:39.815 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.815 + source /etc/os-release 00:01:39.815 ++ NAME='Fedora Linux' 00:01:39.815 ++ VERSION='38 (Cloud Edition)' 00:01:39.815 ++ ID=fedora 00:01:39.815 ++ VERSION_ID=38 00:01:39.815 ++ VERSION_CODENAME= 00:01:39.815 ++ PLATFORM_ID=platform:f38 00:01:39.815 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:39.815 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:39.815 ++ LOGO=fedora-logo-icon 00:01:39.815 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:39.815 ++ HOME_URL=https://fedoraproject.org/ 00:01:39.815 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:39.815 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:39.815 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:39.815 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:39.815 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:39.815 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:39.815 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:39.815 ++ SUPPORT_END=2024-05-14 00:01:39.815 ++ VARIANT='Cloud Edition' 00:01:39.815 ++ VARIANT_ID=cloud 00:01:39.815 + uname -a 00:01:39.815 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:39.815 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:43.106 Hugepages 00:01:43.106 node hugesize free / total 00:01:43.106 node0 1048576kB 0 / 0 00:01:43.106 node0 2048kB 0 / 0 00:01:43.106 node1 1048576kB 0 / 0 00:01:43.106 node1 2048kB 0 / 0 00:01:43.106 00:01:43.106 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:43.106 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:43.106 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:43.106 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:43.106 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:43.106 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:43.106 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:43.107 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:43.107 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:43.107 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:43.107 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:43.107 + rm -f /tmp/spdk-ld-path 00:01:43.107 + source autorun-spdk.conf 00:01:43.107 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.107 ++ SPDK_TEST_NVMF=1 00:01:43.107 ++ SPDK_TEST_NVME_CLI=1 00:01:43.107 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.107 ++ SPDK_TEST_NVMF_NICS=e810 00:01:43.107 ++ SPDK_TEST_VFIOUSER=1 00:01:43.107 ++ SPDK_RUN_UBSAN=1 00:01:43.107 ++ NET_TYPE=phy 00:01:43.107 ++ RUN_NIGHTLY=0 00:01:43.107 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:43.107 + [[ -n '' ]] 00:01:43.107 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.107 + for M in /var/spdk/build-*-manifest.txt 00:01:43.107 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:43.107 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.107 + for M in /var/spdk/build-*-manifest.txt 00:01:43.107 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:43.107 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.107 ++ uname 00:01:43.107 + [[ Linux == \L\i\n\u\x ]] 00:01:43.107 + sudo dmesg -T 00:01:43.107 + sudo dmesg --clear 00:01:43.107 + dmesg_pid=3591209 00:01:43.107 + [[ Fedora Linux == FreeBSD ]] 00:01:43.107 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.107 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.107 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:43.107 + [[ -x /usr/src/fio-static/fio ]] 00:01:43.107 + sudo dmesg -Tw 00:01:43.107 + export FIO_BIN=/usr/src/fio-static/fio 00:01:43.107 + FIO_BIN=/usr/src/fio-static/fio 00:01:43.107 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:43.107 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:43.107 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:43.107 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.107 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.107 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:43.107 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.107 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.107 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.107 Test configuration: 00:01:43.107 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.107 SPDK_TEST_NVMF=1 00:01:43.107 SPDK_TEST_NVME_CLI=1 00:01:43.107 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.107 SPDK_TEST_NVMF_NICS=e810 00:01:43.107 SPDK_TEST_VFIOUSER=1 00:01:43.107 SPDK_RUN_UBSAN=1 00:01:43.107 NET_TYPE=phy 00:01:43.107 RUN_NIGHTLY=0 10:16:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:43.107 10:16:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.107 10:16:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.107 10:16:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.107 10:16:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.107 10:16:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.107 10:16:46 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.107 10:16:46 -- paths/export.sh@5 -- $ export PATH 00:01:43.107 10:16:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.107 10:16:46 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:43.107 10:16:46 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:43.107 10:16:46 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721895406.XXXXXX 00:01:43.107 10:16:46 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721895406.sU07UO 00:01:43.107 10:16:46 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:43.107 10:16:46 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:43.107 10:16:46 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:43.107 10:16:46 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:43.107 10:16:46 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.107 10:16:46 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:43.107 10:16:46 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:43.107 10:16:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.107 10:16:46 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:43.107 10:16:46 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:43.107 10:16:46 -- pm/common@17 -- $ local monitor 00:01:43.107 10:16:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.107 10:16:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.107 10:16:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.107 10:16:46 -- pm/common@21 -- $ date +%s 00:01:43.107 10:16:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.107 10:16:46 -- pm/common@21 -- $ date +%s 00:01:43.107 10:16:46 -- pm/common@21 -- $ date +%s 00:01:43.107 10:16:46 -- pm/common@25 -- $ sleep 1 00:01:43.107 10:16:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895406 00:01:43.107 10:16:46 -- pm/common@21 -- $ date +%s 00:01:43.107 10:16:46 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895406 00:01:43.107 10:16:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895406 00:01:43.107 10:16:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721895406 00:01:43.107 Traceback (most recent call last): 00:01:43.107 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module> 00:01:43.107 import spdk.rpc as rpc # noqa 00:01:43.107 ^^^^^^^^^^^^^^^^^^^^^^ 00:01:43.107 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/rpc/__init__.py", line 13, in <module> 00:01:43.107 from . import bdev 00:01:43.107 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/rpc/bdev.py", line 8, in <module> 00:01:43.107 from spdk.rpc.rpc import * 00:01:43.107 ModuleNotFoundError: No module named 'spdk.rpc.rpc' 00:01:43.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895406_collect-cpu-temp.pm.log 00:01:43.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895406_collect-vmstat.pm.log 00:01:43.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895406_collect-cpu-load.pm.log 00:01:43.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721895406_collect-bmc-pm.bmc.pm.log 00:01:44.045 10:16:47 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:44.045 10:16:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:44.045 10:16:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:44.045 10:16:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.045 10:16:47 -- spdk/autobuild.sh@16 -- $ date -u 00:01:44.045 Thu Jul 25 08:16:47 AM UTC 2024 00:01:44.045 10:16:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:44.045 v24.09-pre-313-g6f18624d4 00:01:44.045 10:16:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:44.045 10:16:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:44.045 10:16:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:44.045 10:16:47 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:44.046 10:16:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:44.046 10:16:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.046 ************************************ 00:01:44.046 START TEST ubsan 00:01:44.046 ************************************ 00:01:44.046 10:16:47 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:44.046 using ubsan 00:01:44.046 00:01:44.046 real 0m0.001s 00:01:44.046 user 0m0.000s 00:01:44.046 sys 0m0.000s 00:01:44.046 10:16:47 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:44.046 10:16:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.046 ************************************ 00:01:44.046 END TEST ubsan 00:01:44.046 ************************************ 00:01:44.046 10:16:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:44.046 10:16:47 -- spdk/autobuild.sh@31 -- 
$ case "$SPDK_TEST_AUTOBUILD" in 00:01:44.046 10:16:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:44.046 10:16:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:44.046 10:16:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:44.046 10:16:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:44.046 10:16:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:44.046 10:16:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:44.046 10:16:47 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:44.305 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:44.305 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:44.564 Using 'verbs' RDMA provider 00:02:00.390 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:12.638 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:12.638 Creating mk/config.mk...done. 00:02:12.638 Creating mk/cc.flags.mk...done. 00:02:12.638 Type 'make' to build. 00:02:12.638 10:17:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:12.638 10:17:15 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:12.638 10:17:15 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:12.638 10:17:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.638 ************************************ 00:02:12.638 START TEST make 00:02:12.638 ************************************ 00:02:12.638 10:17:15 make -- common/autotest_common.sh@1125 -- $ make -j112 00:02:13.581 The Meson build system 00:02:13.581 Version: 1.3.1 00:02:13.581 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:13.581 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:13.581 Build type: native build 00:02:13.581 Project name: libvfio-user 00:02:13.581 Project version: 0.0.1 00:02:13.581 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:13.581 C linker for the host machine: cc ld.bfd 2.39-16 00:02:13.581 Host machine cpu family: x86_64 00:02:13.581 Host machine cpu: x86_64 00:02:13.581 Run-time dependency threads found: YES 00:02:13.581 Library dl found: YES 00:02:13.581 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:13.581 Run-time dependency json-c found: YES 0.17 00:02:13.581 Run-time dependency cmocka found: YES 1.1.7 00:02:13.581 Program pytest-3 found: NO 00:02:13.581 Program flake8 found: NO 00:02:13.581 Program misspell-fixer found: NO 00:02:13.581 Program restructuredtext-lint found: NO 00:02:13.581 Program valgrind found: YES (/usr/bin/valgrind) 00:02:13.581 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.581 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.581 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.581 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:13.581 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:13.581 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:13.581 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:13.581 Build targets in project: 8 00:02:13.581 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:13.581 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:13.581 00:02:13.581 libvfio-user 0.0.1 00:02:13.581 00:02:13.581 User defined options 00:02:13.581 buildtype : debug 00:02:13.581 default_library: shared 00:02:13.581 libdir : /usr/local/lib 00:02:13.581 00:02:13.581 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.147 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:14.147 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:14.147 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:14.147 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:14.147 [4/37] Compiling C object samples/null.p/null.c.o 00:02:14.147 [5/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:14.147 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:14.147 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:14.147 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:14.147 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:14.147 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:14.147 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:14.147 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:14.147 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:14.147 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:14.147 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:14.147 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:14.147 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:14.147 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:14.147 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:14.147 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:14.147 [21/37] Compiling C object samples/client.p/client.c.o 00:02:14.147 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:14.147 [23/37] Compiling C object samples/server.p/server.c.o 00:02:14.147 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:14.147 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:14.147 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:14.147 [27/37] Linking target samples/client 00:02:14.147 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:14.147 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:14.405 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:14.405 [31/37] Linking target test/unit_tests 00:02:14.405 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:14.405 [33/37] Linking target samples/null 00:02:14.405 
[34/37] Linking target samples/server 00:02:14.405 [35/37] Linking target samples/lspci 00:02:14.405 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:14.405 [37/37] Linking target samples/gpio-pci-idio-16 00:02:14.405 INFO: autodetecting backend as ninja 00:02:14.405 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:14.405 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:14.970 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:14.970 ninja: no work to do. 00:02:20.242 The Meson build system 00:02:20.242 Version: 1.3.1 00:02:20.242 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:20.242 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:20.242 Build type: native build 00:02:20.242 Program cat found: YES (/usr/bin/cat) 00:02:20.242 Project name: DPDK 00:02:20.242 Project version: 24.03.0 00:02:20.242 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:20.242 C linker for the host machine: cc ld.bfd 2.39-16 00:02:20.242 Host machine cpu family: x86_64 00:02:20.242 Host machine cpu: x86_64 00:02:20.242 Message: ## Building in Developer Mode ## 00:02:20.242 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:20.242 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:20.242 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:20.242 Program python3 found: YES (/usr/bin/python3) 00:02:20.242 Program cat found: YES (/usr/bin/cat) 00:02:20.242 Compiler for C supports arguments -march=native: YES 00:02:20.242 Checking for size of "void *" : 8 00:02:20.242 Checking for size of "void *" : 8 (cached) 00:02:20.242 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:20.242 Library m found: YES 00:02:20.242 Library numa found: YES 00:02:20.242 Has header "numaif.h" : YES 00:02:20.242 Library fdt found: NO 00:02:20.242 Library execinfo found: NO 00:02:20.242 Has header "execinfo.h" : YES 00:02:20.242 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:20.242 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:20.242 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:20.242 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:20.242 Run-time dependency openssl found: YES 3.0.9 00:02:20.242 Run-time dependency libpcap found: YES 1.10.4 00:02:20.242 Has header "pcap.h" with dependency libpcap: YES 00:02:20.242 Compiler for C supports arguments -Wcast-qual: YES 00:02:20.242 Compiler for C supports arguments -Wdeprecated: YES 00:02:20.242 Compiler for C supports arguments -Wformat: YES 00:02:20.242 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:20.242 Compiler for C supports arguments -Wformat-security: NO 00:02:20.242 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:20.242 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:20.242 Compiler for C supports arguments -Wnested-externs: YES 00:02:20.242 Compiler for C supports arguments -Wold-style-definition: YES 00:02:20.242 Compiler for C supports arguments 
-Wpointer-arith: YES 00:02:20.242 Compiler for C supports arguments -Wsign-compare: YES 00:02:20.242 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:20.242 Compiler for C supports arguments -Wundef: YES 00:02:20.242 Compiler for C supports arguments -Wwrite-strings: YES 00:02:20.242 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:20.243 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:20.243 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:20.243 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:20.243 Program objdump found: YES (/usr/bin/objdump) 00:02:20.243 Compiler for C supports arguments -mavx512f: YES 00:02:20.243 Checking if "AVX512 checking" compiles: YES 00:02:20.243 Fetching value of define "__SSE4_2__" : 1 00:02:20.243 Fetching value of define "__AES__" : 1 00:02:20.243 Fetching value of define "__AVX__" : 1 00:02:20.243 Fetching value of define "__AVX2__" : 1 00:02:20.243 Fetching value of define "__AVX512BW__" : 1 00:02:20.243 Fetching value of define "__AVX512CD__" : 1 00:02:20.243 Fetching value of define "__AVX512DQ__" : 1 00:02:20.243 Fetching value of define "__AVX512F__" : 1 00:02:20.243 Fetching value of define "__AVX512VL__" : 1 00:02:20.243 Fetching value of define "__PCLMUL__" : 1 00:02:20.243 Fetching value of define "__RDRND__" : 1 00:02:20.243 Fetching value of define "__RDSEED__" : 1 00:02:20.243 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:20.243 Fetching value of define "__znver1__" : (undefined) 00:02:20.243 Fetching value of define "__znver2__" : (undefined) 00:02:20.243 Fetching value of define "__znver3__" : (undefined) 00:02:20.243 Fetching value of define "__znver4__" : (undefined) 00:02:20.243 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:20.243 Message: lib/log: Defining dependency "log" 00:02:20.243 Message: lib/kvargs: Defining dependency "kvargs" 00:02:20.243 Message: lib/telemetry: Defining dependency "telemetry" 00:02:20.243 Checking for function "getentropy" : NO 00:02:20.243 Message: lib/eal: Defining dependency "eal" 00:02:20.243 Message: lib/ring: Defining dependency "ring" 00:02:20.243 Message: lib/rcu: Defining dependency "rcu" 00:02:20.243 Message: lib/mempool: Defining dependency "mempool" 00:02:20.243 Message: lib/mbuf: Defining dependency "mbuf" 00:02:20.243 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:20.243 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:20.243 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:20.243 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:20.243 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:20.243 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:20.243 Compiler for C supports arguments -mpclmul: YES 00:02:20.243 Compiler for C supports arguments -maes: YES 00:02:20.243 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:20.243 Compiler for C supports arguments -mavx512bw: YES 00:02:20.243 Compiler for C supports arguments -mavx512dq: YES 00:02:20.243 Compiler for C supports arguments -mavx512vl: YES 00:02:20.243 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:20.243 Compiler for C supports arguments -mavx2: YES 00:02:20.243 Compiler for C supports arguments -mavx: YES 00:02:20.243 Message: lib/net: Defining dependency "net" 00:02:20.243 Message: lib/meter: Defining dependency "meter" 00:02:20.243 Message: lib/ethdev: Defining dependency "ethdev" 00:02:20.243 
Message: lib/pci: Defining dependency "pci" 00:02:20.243 Message: lib/cmdline: Defining dependency "cmdline" 00:02:20.243 Message: lib/hash: Defining dependency "hash" 00:02:20.243 Message: lib/timer: Defining dependency "timer" 00:02:20.243 Message: lib/compressdev: Defining dependency "compressdev" 00:02:20.243 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:20.243 Message: lib/dmadev: Defining dependency "dmadev" 00:02:20.243 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:20.243 Message: lib/power: Defining dependency "power" 00:02:20.243 Message: lib/reorder: Defining dependency "reorder" 00:02:20.243 Message: lib/security: Defining dependency "security" 00:02:20.243 Has header "linux/userfaultfd.h" : YES 00:02:20.243 Has header "linux/vduse.h" : YES 00:02:20.243 Message: lib/vhost: Defining dependency "vhost" 00:02:20.243 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:20.243 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:20.243 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:20.243 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:20.243 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:20.243 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:20.243 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:20.243 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:20.243 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:20.243 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:20.243 Program doxygen found: YES (/usr/bin/doxygen) 00:02:20.243 Configuring doxy-api-html.conf using configuration 00:02:20.243 Configuring doxy-api-man.conf using configuration 00:02:20.243 Program mandb found: YES (/usr/bin/mandb) 00:02:20.243 Program sphinx-build found: NO 00:02:20.243 Configuring rte_build_config.h using configuration 00:02:20.243 Message: 00:02:20.243 ================= 00:02:20.243 Applications Enabled 00:02:20.243 ================= 00:02:20.243 00:02:20.243 apps: 00:02:20.243 00:02:20.243 00:02:20.243 Message: 00:02:20.243 ================= 00:02:20.243 Libraries Enabled 00:02:20.243 ================= 00:02:20.243 00:02:20.243 libs: 00:02:20.243 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:20.243 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:20.243 cryptodev, dmadev, power, reorder, security, vhost, 00:02:20.243 00:02:20.243 Message: 00:02:20.243 =============== 00:02:20.243 Drivers Enabled 00:02:20.243 =============== 00:02:20.243 00:02:20.243 common: 00:02:20.243 00:02:20.243 bus: 00:02:20.243 pci, vdev, 00:02:20.243 mempool: 00:02:20.243 ring, 00:02:20.243 dma: 00:02:20.243 00:02:20.243 net: 00:02:20.243 00:02:20.243 crypto: 00:02:20.243 00:02:20.243 compress: 00:02:20.243 00:02:20.243 vdpa: 00:02:20.243 00:02:20.243 00:02:20.243 Message: 00:02:20.243 ================= 00:02:20.243 Content Skipped 00:02:20.243 ================= 00:02:20.243 00:02:20.243 apps: 00:02:20.243 dumpcap: explicitly disabled via build config 00:02:20.243 graph: explicitly disabled via build config 00:02:20.243 pdump: explicitly disabled via build config 00:02:20.243 proc-info: explicitly disabled via build config 00:02:20.243 test-acl: explicitly disabled via build config 00:02:20.243 test-bbdev: explicitly disabled via build config 00:02:20.243 test-cmdline: explicitly disabled via build config 00:02:20.243 
test-compress-perf: explicitly disabled via build config 00:02:20.243 test-crypto-perf: explicitly disabled via build config 00:02:20.243 test-dma-perf: explicitly disabled via build config 00:02:20.243 test-eventdev: explicitly disabled via build config 00:02:20.243 test-fib: explicitly disabled via build config 00:02:20.243 test-flow-perf: explicitly disabled via build config 00:02:20.243 test-gpudev: explicitly disabled via build config 00:02:20.243 test-mldev: explicitly disabled via build config 00:02:20.243 test-pipeline: explicitly disabled via build config 00:02:20.243 test-pmd: explicitly disabled via build config 00:02:20.243 test-regex: explicitly disabled via build config 00:02:20.243 test-sad: explicitly disabled via build config 00:02:20.243 test-security-perf: explicitly disabled via build config 00:02:20.243 00:02:20.243 libs: 00:02:20.243 argparse: explicitly disabled via build config 00:02:20.243 metrics: explicitly disabled via build config 00:02:20.243 acl: explicitly disabled via build config 00:02:20.243 bbdev: explicitly disabled via build config 00:02:20.243 bitratestats: explicitly disabled via build config 00:02:20.243 bpf: explicitly disabled via build config 00:02:20.243 cfgfile: explicitly disabled via build config 00:02:20.243 distributor: explicitly disabled via build config 00:02:20.243 efd: explicitly disabled via build config 00:02:20.243 eventdev: explicitly disabled via build config 00:02:20.243 dispatcher: explicitly disabled via build config 00:02:20.243 gpudev: explicitly disabled via build config 00:02:20.243 gro: explicitly disabled via build config 00:02:20.243 gso: explicitly disabled via build config 00:02:20.243 ip_frag: explicitly disabled via build config 00:02:20.243 jobstats: explicitly disabled via build config 00:02:20.243 latencystats: explicitly disabled via build config 00:02:20.243 lpm: explicitly disabled via build config 00:02:20.243 member: explicitly disabled via build config 00:02:20.243 pcapng: explicitly disabled via build config 00:02:20.243 rawdev: explicitly disabled via build config 00:02:20.243 regexdev: explicitly disabled via build config 00:02:20.243 mldev: explicitly disabled via build config 00:02:20.243 rib: explicitly disabled via build config 00:02:20.243 sched: explicitly disabled via build config 00:02:20.243 stack: explicitly disabled via build config 00:02:20.243 ipsec: explicitly disabled via build config 00:02:20.243 pdcp: explicitly disabled via build config 00:02:20.243 fib: explicitly disabled via build config 00:02:20.243 port: explicitly disabled via build config 00:02:20.243 pdump: explicitly disabled via build config 00:02:20.243 table: explicitly disabled via build config 00:02:20.243 pipeline: explicitly disabled via build config 00:02:20.243 graph: explicitly disabled via build config 00:02:20.243 node: explicitly disabled via build config 00:02:20.243 00:02:20.243 drivers: 00:02:20.243 common/cpt: not in enabled drivers build config 00:02:20.243 common/dpaax: not in enabled drivers build config 00:02:20.243 common/iavf: not in enabled drivers build config 00:02:20.243 common/idpf: not in enabled drivers build config 00:02:20.243 common/ionic: not in enabled drivers build config 00:02:20.243 common/mvep: not in enabled drivers build config 00:02:20.243 common/octeontx: not in enabled drivers build config 00:02:20.243 bus/auxiliary: not in enabled drivers build config 00:02:20.243 bus/cdx: not in enabled drivers build config 00:02:20.244 bus/dpaa: not in enabled drivers build config 00:02:20.244 
bus/fslmc: not in enabled drivers build config 00:02:20.244 bus/ifpga: not in enabled drivers build config 00:02:20.244 bus/platform: not in enabled drivers build config 00:02:20.244 bus/uacce: not in enabled drivers build config 00:02:20.244 bus/vmbus: not in enabled drivers build config 00:02:20.244 common/cnxk: not in enabled drivers build config 00:02:20.244 common/mlx5: not in enabled drivers build config 00:02:20.244 common/nfp: not in enabled drivers build config 00:02:20.244 common/nitrox: not in enabled drivers build config 00:02:20.244 common/qat: not in enabled drivers build config 00:02:20.244 common/sfc_efx: not in enabled drivers build config 00:02:20.244 mempool/bucket: not in enabled drivers build config 00:02:20.244 mempool/cnxk: not in enabled drivers build config 00:02:20.244 mempool/dpaa: not in enabled drivers build config 00:02:20.244 mempool/dpaa2: not in enabled drivers build config 00:02:20.244 mempool/octeontx: not in enabled drivers build config 00:02:20.244 mempool/stack: not in enabled drivers build config 00:02:20.244 dma/cnxk: not in enabled drivers build config 00:02:20.244 dma/dpaa: not in enabled drivers build config 00:02:20.244 dma/dpaa2: not in enabled drivers build config 00:02:20.244 dma/hisilicon: not in enabled drivers build config 00:02:20.244 dma/idxd: not in enabled drivers build config 00:02:20.244 dma/ioat: not in enabled drivers build config 00:02:20.244 dma/skeleton: not in enabled drivers build config 00:02:20.244 net/af_packet: not in enabled drivers build config 00:02:20.244 net/af_xdp: not in enabled drivers build config 00:02:20.244 net/ark: not in enabled drivers build config 00:02:20.244 net/atlantic: not in enabled drivers build config 00:02:20.244 net/avp: not in enabled drivers build config 00:02:20.244 net/axgbe: not in enabled drivers build config 00:02:20.244 net/bnx2x: not in enabled drivers build config 00:02:20.244 net/bnxt: not in enabled drivers build config 00:02:20.244 net/bonding: not in enabled drivers build config 00:02:20.244 net/cnxk: not in enabled drivers build config 00:02:20.244 net/cpfl: not in enabled drivers build config 00:02:20.244 net/cxgbe: not in enabled drivers build config 00:02:20.244 net/dpaa: not in enabled drivers build config 00:02:20.244 net/dpaa2: not in enabled drivers build config 00:02:20.244 net/e1000: not in enabled drivers build config 00:02:20.244 net/ena: not in enabled drivers build config 00:02:20.244 net/enetc: not in enabled drivers build config 00:02:20.244 net/enetfec: not in enabled drivers build config 00:02:20.244 net/enic: not in enabled drivers build config 00:02:20.244 net/failsafe: not in enabled drivers build config 00:02:20.244 net/fm10k: not in enabled drivers build config 00:02:20.244 net/gve: not in enabled drivers build config 00:02:20.244 net/hinic: not in enabled drivers build config 00:02:20.244 net/hns3: not in enabled drivers build config 00:02:20.244 net/i40e: not in enabled drivers build config 00:02:20.244 net/iavf: not in enabled drivers build config 00:02:20.244 net/ice: not in enabled drivers build config 00:02:20.244 net/idpf: not in enabled drivers build config 00:02:20.244 net/igc: not in enabled drivers build config 00:02:20.244 net/ionic: not in enabled drivers build config 00:02:20.244 net/ipn3ke: not in enabled drivers build config 00:02:20.244 net/ixgbe: not in enabled drivers build config 00:02:20.244 net/mana: not in enabled drivers build config 00:02:20.244 net/memif: not in enabled drivers build config 00:02:20.244 net/mlx4: not in enabled drivers 
build config 00:02:20.244 net/mlx5: not in enabled drivers build config 00:02:20.244 net/mvneta: not in enabled drivers build config 00:02:20.244 net/mvpp2: not in enabled drivers build config 00:02:20.244 net/netvsc: not in enabled drivers build config 00:02:20.244 net/nfb: not in enabled drivers build config 00:02:20.244 net/nfp: not in enabled drivers build config 00:02:20.244 net/ngbe: not in enabled drivers build config 00:02:20.244 net/null: not in enabled drivers build config 00:02:20.244 net/octeontx: not in enabled drivers build config 00:02:20.244 net/octeon_ep: not in enabled drivers build config 00:02:20.244 net/pcap: not in enabled drivers build config 00:02:20.244 net/pfe: not in enabled drivers build config 00:02:20.244 net/qede: not in enabled drivers build config 00:02:20.244 net/ring: not in enabled drivers build config 00:02:20.244 net/sfc: not in enabled drivers build config 00:02:20.244 net/softnic: not in enabled drivers build config 00:02:20.244 net/tap: not in enabled drivers build config 00:02:20.244 net/thunderx: not in enabled drivers build config 00:02:20.244 net/txgbe: not in enabled drivers build config 00:02:20.244 net/vdev_netvsc: not in enabled drivers build config 00:02:20.244 net/vhost: not in enabled drivers build config 00:02:20.244 net/virtio: not in enabled drivers build config 00:02:20.244 net/vmxnet3: not in enabled drivers build config 00:02:20.244 raw/*: missing internal dependency, "rawdev" 00:02:20.244 crypto/armv8: not in enabled drivers build config 00:02:20.244 crypto/bcmfs: not in enabled drivers build config 00:02:20.244 crypto/caam_jr: not in enabled drivers build config 00:02:20.244 crypto/ccp: not in enabled drivers build config 00:02:20.244 crypto/cnxk: not in enabled drivers build config 00:02:20.244 crypto/dpaa_sec: not in enabled drivers build config 00:02:20.244 crypto/dpaa2_sec: not in enabled drivers build config 00:02:20.244 crypto/ipsec_mb: not in enabled drivers build config 00:02:20.244 crypto/mlx5: not in enabled drivers build config 00:02:20.244 crypto/mvsam: not in enabled drivers build config 00:02:20.244 crypto/nitrox: not in enabled drivers build config 00:02:20.244 crypto/null: not in enabled drivers build config 00:02:20.244 crypto/octeontx: not in enabled drivers build config 00:02:20.244 crypto/openssl: not in enabled drivers build config 00:02:20.244 crypto/scheduler: not in enabled drivers build config 00:02:20.244 crypto/uadk: not in enabled drivers build config 00:02:20.244 crypto/virtio: not in enabled drivers build config 00:02:20.244 compress/isal: not in enabled drivers build config 00:02:20.244 compress/mlx5: not in enabled drivers build config 00:02:20.244 compress/nitrox: not in enabled drivers build config 00:02:20.244 compress/octeontx: not in enabled drivers build config 00:02:20.244 compress/zlib: not in enabled drivers build config 00:02:20.244 regex/*: missing internal dependency, "regexdev" 00:02:20.244 ml/*: missing internal dependency, "mldev" 00:02:20.244 vdpa/ifc: not in enabled drivers build config 00:02:20.244 vdpa/mlx5: not in enabled drivers build config 00:02:20.244 vdpa/nfp: not in enabled drivers build config 00:02:20.244 vdpa/sfc: not in enabled drivers build config 00:02:20.244 event/*: missing internal dependency, "eventdev" 00:02:20.244 baseband/*: missing internal dependency, "bbdev" 00:02:20.244 gpu/*: missing internal dependency, "gpudev" 00:02:20.244 00:02:20.244 00:02:20.503 Build targets in project: 85 00:02:20.503 00:02:20.503 DPDK 24.03.0 00:02:20.503 00:02:20.503 User defined 
options 00:02:20.503 buildtype : debug 00:02:20.503 default_library : shared 00:02:20.503 libdir : lib 00:02:20.503 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:20.503 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:20.503 c_link_args : 00:02:20.503 cpu_instruction_set: native 00:02:20.503 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:20.503 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:20.503 enable_docs : false 00:02:20.503 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:20.503 enable_kmods : false 00:02:20.503 max_lcores : 128 00:02:20.503 tests : false 00:02:20.503 00:02:20.503 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:20.763 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:21.034 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:21.034 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.034 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.034 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:21.034 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.034 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.034 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:21.034 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:21.034 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.034 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:21.034 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:21.034 [12/268] Linking static target lib/librte_kvargs.a 00:02:21.034 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.034 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:21.034 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.034 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:21.034 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.299 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.299 [19/268] Linking static target lib/librte_log.a 00:02:21.299 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.299 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.299 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.299 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.299 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.299 [25/268] Linking static target lib/librte_pci.a 00:02:21.299 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.299 [27/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.299 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.299 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.299 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.299 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.299 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.299 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.560 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.560 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.560 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:21.560 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:21.560 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.560 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:21.560 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:21.560 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.560 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:21.560 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:21.560 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.560 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:21.560 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.560 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:21.560 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:21.560 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:21.560 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:21.560 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.560 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:21.560 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:21.560 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:21.560 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:21.560 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:21.560 [57/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.560 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:21.560 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:21.560 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:21.560 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.560 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.560 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:21.560 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:21.560 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:21.560 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:21.561 [67/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:21.561 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.561 [69/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.561 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:21.561 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:21.561 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:21.561 [73/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.561 [74/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.561 [75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.561 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:21.561 [77/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.561 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.561 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:21.561 [80/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.561 [81/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.561 [82/268] Linking static target lib/librte_telemetry.a 00:02:21.561 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:21.561 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.561 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:21.561 [86/268] Linking static target lib/librte_meter.a 00:02:21.561 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.561 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:21.561 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:21.561 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:21.561 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.819 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.819 [93/268] Linking static target lib/librte_ring.a 00:02:21.819 [94/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:21.819 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:21.819 [96/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.819 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:21.819 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:21.819 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:21.819 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.819 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.819 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:21.819 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.819 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.819 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.819 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.819 [107/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.819 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.819 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.819 [110/268] Linking static target lib/librte_cmdline.a 00:02:21.819 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.819 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.819 [113/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.819 [114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.819 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.819 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.819 [117/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:21.819 [118/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:21.819 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.819 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.819 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.819 [122/268] Linking static target lib/librte_net.a 00:02:21.819 [123/268] Linking static target lib/librte_mempool.a 00:02:21.819 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.819 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.819 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.819 [127/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.819 [128/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.819 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.819 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.819 [131/268] Linking static target lib/librte_timer.a 00:02:21.820 [132/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:21.820 [133/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.820 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.820 [135/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.820 [136/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.820 [137/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.820 [138/268] Linking static target lib/librte_dmadev.a 00:02:21.820 [139/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:21.820 [140/268] Linking static target lib/librte_eal.a 00:02:21.820 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.820 [142/268] Linking static target lib/librte_rcu.a 00:02:21.820 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.820 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:21.820 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.820 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.820 [147/268] Linking static target lib/librte_compressdev.a 00:02:21.820 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.820 [149/268] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.820 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:21.820 [151/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.820 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:21.820 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.820 [154/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.078 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.078 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.078 [157/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.078 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.078 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.078 [160/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.078 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.078 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.078 [163/268] Linking target lib/librte_log.so.24.1 00:02:22.078 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.078 [165/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.078 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.078 [167/268] Linking static target lib/librte_mbuf.a 00:02:22.078 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.078 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.078 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.078 [171/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.078 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.078 [173/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.078 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.078 [175/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.078 [176/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.078 [177/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.078 [178/268] Linking static target lib/librte_security.a 00:02:22.078 [179/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:22.078 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.078 [181/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.078 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.078 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.078 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.078 [185/268] Linking static target lib/librte_reorder.a 00:02:22.078 [186/268] Linking static target lib/librte_power.a 00:02:22.078 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.078 [188/268] Linking static target lib/librte_hash.a 00:02:22.078 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 
00:02:22.078 [190/268] Linking target lib/librte_kvargs.so.24.1 00:02:22.337 [191/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.337 [192/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.337 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.337 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.337 [195/268] Linking static target lib/librte_cryptodev.a 00:02:22.337 [196/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.337 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.337 [198/268] Linking target lib/librte_telemetry.so.24.1 00:02:22.337 [199/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.337 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.337 [201/268] Linking static target drivers/librte_bus_pci.a 00:02:22.337 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.337 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.337 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.337 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.337 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.337 [207/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:22.337 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.337 [209/268] Linking static target drivers/librte_bus_vdev.a 00:02:22.337 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.337 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.337 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:22.337 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:22.596 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.596 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.596 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.854 [217/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.854 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.854 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.854 [220/268] Linking static target lib/librte_ethdev.a 00:02:22.854 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.854 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.854 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.113 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.113 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.113 [226/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.113 [227/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.051 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:24.051 [229/268] Linking static target lib/librte_vhost.a 00:02:24.310 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.287 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.858 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.765 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.765 [234/268] Linking target lib/librte_eal.so.24.1 00:02:35.023 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:35.023 [236/268] Linking target lib/librte_ring.so.24.1 00:02:35.023 [237/268] Linking target lib/librte_meter.so.24.1 00:02:35.023 [238/268] Linking target lib/librte_timer.so.24.1 00:02:35.023 [239/268] Linking target lib/librte_pci.so.24.1 00:02:35.023 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:35.023 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:35.023 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:35.023 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:35.023 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:35.023 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.023 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:35.282 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:35.282 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:35.282 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.282 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.282 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.282 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.282 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.541 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.541 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:35.541 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:35.541 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.541 [258/268] Linking target lib/librte_net.so.24.1 00:02:35.541 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.800 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.800 [261/268] Linking target lib/librte_security.so.24.1 00:02:35.800 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:35.800 [263/268] Linking target lib/librte_hash.so.24.1 00:02:35.800 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:35.800 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.800 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:36.059 [267/268] Linking target lib/librte_power.so.24.1 00:02:36.059 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:36.059 INFO: autodetecting 
backend as ninja 00:02:36.059 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:36.999 CC lib/ut/ut.o 00:02:36.999 CC lib/ut_mock/mock.o 00:02:36.999 CC lib/log/log.o 00:02:36.999 CC lib/log/log_deprecated.o 00:02:36.999 CC lib/log/log_flags.o 00:02:37.258 LIB libspdk_ut.a 00:02:37.258 LIB libspdk_ut_mock.a 00:02:37.258 SO libspdk_ut.so.2.0 00:02:37.258 LIB libspdk_log.a 00:02:37.258 SO libspdk_ut_mock.so.6.0 00:02:37.258 SYMLINK libspdk_ut.so 00:02:37.258 SO libspdk_log.so.7.0 00:02:37.258 SYMLINK libspdk_ut_mock.so 00:02:37.258 SYMLINK libspdk_log.so 00:02:37.827 CXX lib/trace_parser/trace.o 00:02:37.827 CC lib/util/base64.o 00:02:37.827 CC lib/util/bit_array.o 00:02:37.827 CC lib/util/cpuset.o 00:02:37.827 CC lib/util/crc32c.o 00:02:37.827 CC lib/util/crc16.o 00:02:37.827 CC lib/util/crc32.o 00:02:37.827 CC lib/util/crc32_ieee.o 00:02:37.828 CC lib/util/crc64.o 00:02:37.828 CC lib/ioat/ioat.o 00:02:37.828 CC lib/dma/dma.o 00:02:37.828 CC lib/util/dif.o 00:02:37.828 CC lib/util/fd.o 00:02:37.828 CC lib/util/fd_group.o 00:02:37.828 CC lib/util/file.o 00:02:37.828 CC lib/util/hexlify.o 00:02:37.828 CC lib/util/iov.o 00:02:37.828 CC lib/util/math.o 00:02:37.828 CC lib/util/strerror_tls.o 00:02:37.828 CC lib/util/net.o 00:02:37.828 CC lib/util/string.o 00:02:37.828 CC lib/util/pipe.o 00:02:37.828 CC lib/util/uuid.o 00:02:37.828 CC lib/util/xor.o 00:02:37.828 CC lib/util/zipf.o 00:02:37.828 CC lib/vfio_user/host/vfio_user.o 00:02:37.828 CC lib/vfio_user/host/vfio_user_pci.o 00:02:37.828 LIB libspdk_dma.a 00:02:38.087 SO libspdk_dma.so.4.0 00:02:38.087 LIB libspdk_ioat.a 00:02:38.087 SYMLINK libspdk_dma.so 00:02:38.087 SO libspdk_ioat.so.7.0 00:02:38.087 LIB libspdk_vfio_user.a 00:02:38.087 SYMLINK libspdk_ioat.so 00:02:38.087 SO libspdk_vfio_user.so.5.0 00:02:38.087 LIB libspdk_util.a 00:02:38.087 SYMLINK libspdk_vfio_user.so 00:02:38.347 SO libspdk_util.so.10.0 00:02:38.347 LIB libspdk_trace_parser.a 00:02:38.347 SYMLINK libspdk_util.so 00:02:38.347 SO libspdk_trace_parser.so.5.0 00:02:38.607 SYMLINK libspdk_trace_parser.so 00:02:38.607 CC lib/env_dpdk/env.o 00:02:38.607 CC lib/env_dpdk/pci.o 00:02:38.607 CC lib/env_dpdk/memory.o 00:02:38.607 CC lib/env_dpdk/init.o 00:02:38.607 CC lib/env_dpdk/threads.o 00:02:38.607 CC lib/env_dpdk/pci_ioat.o 00:02:38.607 CC lib/env_dpdk/pci_virtio.o 00:02:38.607 CC lib/env_dpdk/pci_vmd.o 00:02:38.607 CC lib/idxd/idxd.o 00:02:38.607 CC lib/env_dpdk/pci_event.o 00:02:38.607 CC lib/idxd/idxd_user.o 00:02:38.607 CC lib/env_dpdk/pci_idxd.o 00:02:38.607 CC lib/idxd/idxd_kernel.o 00:02:38.607 CC lib/env_dpdk/sigbus_handler.o 00:02:38.607 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:38.607 CC lib/env_dpdk/pci_dpdk.o 00:02:38.607 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:38.607 CC lib/json/json_parse.o 00:02:38.607 CC lib/rdma_utils/rdma_utils.o 00:02:38.607 CC lib/json/json_util.o 00:02:38.607 CC lib/json/json_write.o 00:02:38.866 CC lib/conf/conf.o 00:02:38.866 CC lib/rdma_provider/common.o 00:02:38.866 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:38.866 CC lib/vmd/vmd.o 00:02:38.866 CC lib/vmd/led.o 00:02:38.866 LIB libspdk_rdma_provider.a 00:02:38.866 LIB libspdk_conf.a 00:02:38.866 SO libspdk_rdma_provider.so.6.0 00:02:39.125 LIB libspdk_rdma_utils.a 00:02:39.125 SO libspdk_conf.so.6.0 00:02:39.125 LIB libspdk_json.a 00:02:39.125 SO libspdk_rdma_utils.so.1.0 00:02:39.125 SO libspdk_json.so.6.0 00:02:39.125 SYMLINK libspdk_rdma_provider.so 00:02:39.125 SYMLINK libspdk_conf.so 
00:02:39.125 SYMLINK libspdk_rdma_utils.so 00:02:39.125 SYMLINK libspdk_json.so 00:02:39.125 LIB libspdk_idxd.a 00:02:39.125 SO libspdk_idxd.so.12.0 00:02:39.125 LIB libspdk_vmd.a 00:02:39.384 SO libspdk_vmd.so.6.0 00:02:39.384 SYMLINK libspdk_idxd.so 00:02:39.384 SYMLINK libspdk_vmd.so 00:02:39.384 CC lib/jsonrpc/jsonrpc_server.o 00:02:39.384 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:39.384 CC lib/jsonrpc/jsonrpc_client.o 00:02:39.384 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:39.644 LIB libspdk_jsonrpc.a 00:02:39.644 LIB libspdk_env_dpdk.a 00:02:39.644 SO libspdk_jsonrpc.so.6.0 00:02:39.902 SO libspdk_env_dpdk.so.15.0 00:02:39.902 SYMLINK libspdk_jsonrpc.so 00:02:39.902 SYMLINK libspdk_env_dpdk.so 00:02:40.161 CC lib/rpc/rpc.o 00:02:40.420 LIB libspdk_rpc.a 00:02:40.420 SO libspdk_rpc.so.6.0 00:02:40.420 SYMLINK libspdk_rpc.so 00:02:40.679 CC lib/trace/trace.o 00:02:40.679 CC lib/trace/trace_flags.o 00:02:40.679 CC lib/trace/trace_rpc.o 00:02:40.679 CC lib/notify/notify.o 00:02:40.679 CC lib/notify/notify_rpc.o 00:02:40.679 CC lib/keyring/keyring.o 00:02:40.679 CC lib/keyring/keyring_rpc.o 00:02:40.938 LIB libspdk_notify.a 00:02:40.938 SO libspdk_notify.so.6.0 00:02:40.938 LIB libspdk_trace.a 00:02:40.938 LIB libspdk_keyring.a 00:02:40.938 SYMLINK libspdk_notify.so 00:02:40.938 SO libspdk_trace.so.10.0 00:02:40.938 SO libspdk_keyring.so.1.0 00:02:41.197 SYMLINK libspdk_trace.so 00:02:41.197 SYMLINK libspdk_keyring.so 00:02:41.455 CC lib/sock/sock.o 00:02:41.455 CC lib/sock/sock_rpc.o 00:02:41.455 CC lib/thread/thread.o 00:02:41.455 CC lib/thread/iobuf.o 00:02:41.714 LIB libspdk_sock.a 00:02:41.714 SO libspdk_sock.so.10.0 00:02:41.973 SYMLINK libspdk_sock.so 00:02:42.233 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.233 CC lib/nvme/nvme_ctrlr.o 00:02:42.233 CC lib/nvme/nvme_ns_cmd.o 00:02:42.233 CC lib/nvme/nvme_fabric.o 00:02:42.233 CC lib/nvme/nvme_ns.o 00:02:42.233 CC lib/nvme/nvme_pcie_common.o 00:02:42.233 CC lib/nvme/nvme_pcie.o 00:02:42.233 CC lib/nvme/nvme_qpair.o 00:02:42.233 CC lib/nvme/nvme.o 00:02:42.233 CC lib/nvme/nvme_quirks.o 00:02:42.233 CC lib/nvme/nvme_transport.o 00:02:42.233 CC lib/nvme/nvme_discovery.o 00:02:42.233 CC lib/nvme/nvme_tcp.o 00:02:42.233 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.233 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.233 CC lib/nvme/nvme_opal.o 00:02:42.233 CC lib/nvme/nvme_io_msg.o 00:02:42.233 CC lib/nvme/nvme_stubs.o 00:02:42.233 CC lib/nvme/nvme_poll_group.o 00:02:42.233 CC lib/nvme/nvme_zns.o 00:02:42.233 CC lib/nvme/nvme_auth.o 00:02:42.233 CC lib/nvme/nvme_cuse.o 00:02:42.233 CC lib/nvme/nvme_vfio_user.o 00:02:42.233 CC lib/nvme/nvme_rdma.o 00:02:42.493 LIB libspdk_thread.a 00:02:42.493 SO libspdk_thread.so.10.1 00:02:42.493 SYMLINK libspdk_thread.so 00:02:42.829 CC lib/blob/blobstore.o 00:02:42.829 CC lib/blob/request.o 00:02:42.829 CC lib/blob/zeroes.o 00:02:42.829 CC lib/blob/blob_bs_dev.o 00:02:42.829 CC lib/init/json_config.o 00:02:42.829 CC lib/vfu_tgt/tgt_rpc.o 00:02:42.829 CC lib/vfu_tgt/tgt_endpoint.o 00:02:42.829 CC lib/init/subsystem.o 00:02:42.829 CC lib/init/subsystem_rpc.o 00:02:42.829 CC lib/init/rpc.o 00:02:42.829 CC lib/accel/accel.o 00:02:42.829 CC lib/accel/accel_rpc.o 00:02:42.829 CC lib/accel/accel_sw.o 00:02:42.829 CC lib/virtio/virtio_vhost_user.o 00:02:42.829 CC lib/virtio/virtio.o 00:02:42.829 CC lib/virtio/virtio_vfio_user.o 00:02:42.829 CC lib/virtio/virtio_pci.o 00:02:43.119 LIB libspdk_init.a 00:02:43.119 SO libspdk_init.so.5.0 00:02:43.119 LIB libspdk_vfu_tgt.a 00:02:43.119 SYMLINK libspdk_init.so 00:02:43.119 SO 
libspdk_vfu_tgt.so.3.0 00:02:43.119 LIB libspdk_virtio.a 00:02:43.378 SO libspdk_virtio.so.7.0 00:02:43.378 SYMLINK libspdk_vfu_tgt.so 00:02:43.378 SYMLINK libspdk_virtio.so 00:02:43.637 CC lib/event/app.o 00:02:43.637 CC lib/event/app_rpc.o 00:02:43.637 CC lib/event/reactor.o 00:02:43.637 CC lib/event/log_rpc.o 00:02:43.637 CC lib/event/scheduler_static.o 00:02:43.637 LIB libspdk_accel.a 00:02:43.637 SO libspdk_accel.so.16.0 00:02:43.637 SYMLINK libspdk_accel.so 00:02:43.896 LIB libspdk_nvme.a 00:02:43.896 LIB libspdk_event.a 00:02:43.896 SO libspdk_nvme.so.13.1 00:02:43.896 SO libspdk_event.so.14.0 00:02:43.896 SYMLINK libspdk_event.so 00:02:44.156 CC lib/bdev/bdev.o 00:02:44.156 CC lib/bdev/part.o 00:02:44.156 CC lib/bdev/bdev_rpc.o 00:02:44.156 CC lib/bdev/scsi_nvme.o 00:02:44.156 CC lib/bdev/bdev_zone.o 00:02:44.156 SYMLINK libspdk_nvme.so 00:02:45.094 LIB libspdk_blob.a 00:02:45.094 SO libspdk_blob.so.11.0 00:02:45.094 SYMLINK libspdk_blob.so 00:02:45.353 CC lib/lvol/lvol.o 00:02:45.353 CC lib/blobfs/blobfs.o 00:02:45.353 CC lib/blobfs/tree.o 00:02:45.920 LIB libspdk_bdev.a 00:02:45.920 SO libspdk_bdev.so.16.0 00:02:45.920 SYMLINK libspdk_bdev.so 00:02:45.920 LIB libspdk_blobfs.a 00:02:46.177 SO libspdk_blobfs.so.10.0 00:02:46.177 LIB libspdk_lvol.a 00:02:46.177 SO libspdk_lvol.so.10.0 00:02:46.177 SYMLINK libspdk_blobfs.so 00:02:46.177 SYMLINK libspdk_lvol.so 00:02:46.177 CC lib/scsi/dev.o 00:02:46.436 CC lib/scsi/lun.o 00:02:46.436 CC lib/scsi/port.o 00:02:46.436 CC lib/scsi/scsi.o 00:02:46.436 CC lib/scsi/scsi_bdev.o 00:02:46.436 CC lib/scsi/scsi_pr.o 00:02:46.436 CC lib/scsi/scsi_rpc.o 00:02:46.436 CC lib/scsi/task.o 00:02:46.436 CC lib/nbd/nbd.o 00:02:46.436 CC lib/ublk/ublk.o 00:02:46.436 CC lib/ublk/ublk_rpc.o 00:02:46.436 CC lib/nbd/nbd_rpc.o 00:02:46.436 CC lib/nvmf/ctrlr.o 00:02:46.436 CC lib/nvmf/ctrlr_discovery.o 00:02:46.436 CC lib/nvmf/ctrlr_bdev.o 00:02:46.436 CC lib/nvmf/nvmf.o 00:02:46.436 CC lib/nvmf/subsystem.o 00:02:46.436 CC lib/nvmf/tcp.o 00:02:46.436 CC lib/nvmf/nvmf_rpc.o 00:02:46.436 CC lib/nvmf/transport.o 00:02:46.436 CC lib/nvmf/vfio_user.o 00:02:46.436 CC lib/nvmf/stubs.o 00:02:46.436 CC lib/ftl/ftl_core.o 00:02:46.436 CC lib/nvmf/mdns_server.o 00:02:46.436 CC lib/ftl/ftl_init.o 00:02:46.436 CC lib/ftl/ftl_layout.o 00:02:46.436 CC lib/nvmf/rdma.o 00:02:46.436 CC lib/nvmf/auth.o 00:02:46.436 CC lib/ftl/ftl_debug.o 00:02:46.436 CC lib/ftl/ftl_io.o 00:02:46.436 CC lib/ftl/ftl_sb.o 00:02:46.436 CC lib/ftl/ftl_l2p.o 00:02:46.436 CC lib/ftl/ftl_l2p_flat.o 00:02:46.436 CC lib/ftl/ftl_nv_cache.o 00:02:46.436 CC lib/ftl/ftl_band.o 00:02:46.436 CC lib/ftl/ftl_rq.o 00:02:46.436 CC lib/ftl/ftl_band_ops.o 00:02:46.436 CC lib/ftl/ftl_writer.o 00:02:46.436 CC lib/ftl/ftl_reloc.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt.o 00:02:46.436 CC lib/ftl/ftl_l2p_cache.o 00:02:46.436 CC lib/ftl/ftl_p2l.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.436 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.436 CC lib/ftl/utils/ftl_conf.o 00:02:46.436 CC lib/ftl/utils/ftl_md.o 00:02:46.436 CC lib/ftl/utils/ftl_mempool.o 00:02:46.436 
CC lib/ftl/utils/ftl_property.o 00:02:46.436 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.436 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.436 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.436 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.436 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.436 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.436 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:46.436 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.436 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:46.436 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:46.436 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.436 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.436 CC lib/ftl/base/ftl_base_dev.o 00:02:46.436 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.436 CC lib/ftl/ftl_trace.o 00:02:47.002 LIB libspdk_nbd.a 00:02:47.002 SO libspdk_nbd.so.7.0 00:02:47.002 LIB libspdk_scsi.a 00:02:47.002 SYMLINK libspdk_nbd.so 00:02:47.002 SO libspdk_scsi.so.9.0 00:02:47.002 LIB libspdk_ublk.a 00:02:47.002 SO libspdk_ublk.so.3.0 00:02:47.002 SYMLINK libspdk_scsi.so 00:02:47.002 SYMLINK libspdk_ublk.so 00:02:47.260 LIB libspdk_ftl.a 00:02:47.517 CC lib/iscsi/init_grp.o 00:02:47.517 CC lib/iscsi/conn.o 00:02:47.517 CC lib/iscsi/iscsi.o 00:02:47.517 CC lib/vhost/vhost.o 00:02:47.517 CC lib/iscsi/md5.o 00:02:47.517 CC lib/vhost/vhost_rpc.o 00:02:47.517 CC lib/iscsi/param.o 00:02:47.517 CC lib/vhost/vhost_scsi.o 00:02:47.517 CC lib/iscsi/portal_grp.o 00:02:47.517 CC lib/iscsi/tgt_node.o 00:02:47.517 CC lib/vhost/vhost_blk.o 00:02:47.517 CC lib/iscsi/iscsi_subsystem.o 00:02:47.517 CC lib/vhost/rte_vhost_user.o 00:02:47.517 CC lib/iscsi/iscsi_rpc.o 00:02:47.517 CC lib/iscsi/task.o 00:02:47.517 SO libspdk_ftl.so.9.0 00:02:47.775 SYMLINK libspdk_ftl.so 00:02:48.033 LIB libspdk_nvmf.a 00:02:48.291 SO libspdk_nvmf.so.19.0 00:02:48.291 LIB libspdk_vhost.a 00:02:48.291 SO libspdk_vhost.so.8.0 00:02:48.291 SYMLINK libspdk_vhost.so 00:02:48.291 LIB libspdk_iscsi.a 00:02:48.291 SYMLINK libspdk_nvmf.so 00:02:48.550 SO libspdk_iscsi.so.8.0 00:02:48.550 SYMLINK libspdk_iscsi.so 00:02:49.115 CC module/env_dpdk/env_dpdk_rpc.o 00:02:49.115 CC module/vfu_device/vfu_virtio.o 00:02:49.115 CC module/vfu_device/vfu_virtio_blk.o 00:02:49.115 CC module/vfu_device/vfu_virtio_scsi.o 00:02:49.115 CC module/vfu_device/vfu_virtio_rpc.o 00:02:49.374 LIB libspdk_env_dpdk_rpc.a 00:02:49.374 CC module/accel/error/accel_error.o 00:02:49.374 CC module/sock/posix/posix.o 00:02:49.374 CC module/accel/dsa/accel_dsa.o 00:02:49.374 CC module/accel/dsa/accel_dsa_rpc.o 00:02:49.374 CC module/accel/iaa/accel_iaa.o 00:02:49.374 CC module/accel/error/accel_error_rpc.o 00:02:49.374 CC module/accel/iaa/accel_iaa_rpc.o 00:02:49.374 CC module/keyring/file/keyring.o 00:02:49.374 CC module/keyring/file/keyring_rpc.o 00:02:49.374 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:49.374 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:49.374 CC module/scheduler/gscheduler/gscheduler.o 00:02:49.374 SO libspdk_env_dpdk_rpc.so.6.0 00:02:49.374 CC module/keyring/linux/keyring.o 00:02:49.374 CC module/keyring/linux/keyring_rpc.o 00:02:49.374 CC module/accel/ioat/accel_ioat.o 00:02:49.374 CC module/accel/ioat/accel_ioat_rpc.o 00:02:49.374 CC module/blob/bdev/blob_bdev.o 00:02:49.374 SYMLINK libspdk_env_dpdk_rpc.so 00:02:49.374 LIB libspdk_keyring_file.a 00:02:49.374 LIB libspdk_scheduler_gscheduler.a 00:02:49.374 LIB libspdk_keyring_linux.a 00:02:49.374 LIB libspdk_scheduler_dpdk_governor.a 00:02:49.374 LIB libspdk_accel_error.a 00:02:49.374 SO libspdk_keyring_file.so.1.0 00:02:49.374 LIB libspdk_accel_iaa.a 
00:02:49.374 LIB libspdk_accel_ioat.a 00:02:49.374 SO libspdk_scheduler_gscheduler.so.4.0 00:02:49.374 LIB libspdk_scheduler_dynamic.a 00:02:49.374 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:49.374 SO libspdk_keyring_linux.so.1.0 00:02:49.374 SO libspdk_accel_error.so.2.0 00:02:49.632 SO libspdk_accel_iaa.so.3.0 00:02:49.632 SO libspdk_accel_ioat.so.6.0 00:02:49.632 LIB libspdk_accel_dsa.a 00:02:49.632 SO libspdk_scheduler_dynamic.so.4.0 00:02:49.632 SYMLINK libspdk_scheduler_gscheduler.so 00:02:49.632 SYMLINK libspdk_keyring_file.so 00:02:49.632 LIB libspdk_blob_bdev.a 00:02:49.632 SYMLINK libspdk_keyring_linux.so 00:02:49.632 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:49.632 SYMLINK libspdk_accel_error.so 00:02:49.632 SO libspdk_accel_dsa.so.5.0 00:02:49.632 SYMLINK libspdk_accel_ioat.so 00:02:49.632 SYMLINK libspdk_accel_iaa.so 00:02:49.632 SYMLINK libspdk_scheduler_dynamic.so 00:02:49.632 SO libspdk_blob_bdev.so.11.0 00:02:49.632 LIB libspdk_vfu_device.a 00:02:49.632 SYMLINK libspdk_accel_dsa.so 00:02:49.632 SYMLINK libspdk_blob_bdev.so 00:02:49.632 SO libspdk_vfu_device.so.3.0 00:02:49.632 SYMLINK libspdk_vfu_device.so 00:02:49.889 LIB libspdk_sock_posix.a 00:02:49.889 SO libspdk_sock_posix.so.6.0 00:02:49.889 SYMLINK libspdk_sock_posix.so 00:02:50.147 CC module/bdev/gpt/gpt.o 00:02:50.147 CC module/bdev/delay/vbdev_delay.o 00:02:50.147 CC module/bdev/gpt/vbdev_gpt.o 00:02:50.147 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:50.147 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:50.147 CC module/blobfs/bdev/blobfs_bdev.o 00:02:50.147 CC module/bdev/malloc/bdev_malloc.o 00:02:50.147 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:50.147 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:50.147 CC module/bdev/error/vbdev_error.o 00:02:50.147 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:50.147 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:50.147 CC module/bdev/passthru/vbdev_passthru.o 00:02:50.147 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:50.147 CC module/bdev/lvol/vbdev_lvol.o 00:02:50.147 CC module/bdev/error/vbdev_error_rpc.o 00:02:50.147 CC module/bdev/nvme/bdev_nvme.o 00:02:50.147 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:50.147 CC module/bdev/nvme/nvme_rpc.o 00:02:50.147 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:50.147 CC module/bdev/nvme/bdev_mdns_client.o 00:02:50.147 CC module/bdev/nvme/vbdev_opal.o 00:02:50.147 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:50.147 CC module/bdev/aio/bdev_aio.o 00:02:50.147 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:50.147 CC module/bdev/iscsi/bdev_iscsi.o 00:02:50.147 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:50.147 CC module/bdev/aio/bdev_aio_rpc.o 00:02:50.147 CC module/bdev/split/vbdev_split.o 00:02:50.147 CC module/bdev/null/bdev_null_rpc.o 00:02:50.147 CC module/bdev/null/bdev_null.o 00:02:50.147 CC module/bdev/split/vbdev_split_rpc.o 00:02:50.147 CC module/bdev/raid/bdev_raid_rpc.o 00:02:50.147 CC module/bdev/raid/bdev_raid.o 00:02:50.147 CC module/bdev/raid/bdev_raid_sb.o 00:02:50.147 CC module/bdev/raid/raid0.o 00:02:50.147 CC module/bdev/raid/raid1.o 00:02:50.147 CC module/bdev/raid/concat.o 00:02:50.147 CC module/bdev/ftl/bdev_ftl.o 00:02:50.147 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:50.147 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:50.147 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:50.405 LIB libspdk_blobfs_bdev.a 00:02:50.405 SO libspdk_blobfs_bdev.so.6.0 00:02:50.405 LIB libspdk_bdev_gpt.a 00:02:50.405 LIB libspdk_bdev_error.a 00:02:50.405 LIB libspdk_bdev_split.a 00:02:50.405 SO 
libspdk_bdev_gpt.so.6.0 00:02:50.405 LIB libspdk_bdev_null.a 00:02:50.405 SYMLINK libspdk_blobfs_bdev.so 00:02:50.405 SO libspdk_bdev_error.so.6.0 00:02:50.405 LIB libspdk_bdev_passthru.a 00:02:50.405 SO libspdk_bdev_split.so.6.0 00:02:50.405 LIB libspdk_bdev_ftl.a 00:02:50.663 LIB libspdk_bdev_malloc.a 00:02:50.663 SO libspdk_bdev_null.so.6.0 00:02:50.663 LIB libspdk_bdev_delay.a 00:02:50.663 LIB libspdk_bdev_zone_block.a 00:02:50.663 SYMLINK libspdk_bdev_gpt.so 00:02:50.663 LIB libspdk_bdev_aio.a 00:02:50.663 SO libspdk_bdev_passthru.so.6.0 00:02:50.663 SO libspdk_bdev_ftl.so.6.0 00:02:50.663 SYMLINK libspdk_bdev_error.so 00:02:50.663 SYMLINK libspdk_bdev_split.so 00:02:50.663 SO libspdk_bdev_malloc.so.6.0 00:02:50.663 SO libspdk_bdev_delay.so.6.0 00:02:50.663 LIB libspdk_bdev_iscsi.a 00:02:50.663 SO libspdk_bdev_zone_block.so.6.0 00:02:50.663 SO libspdk_bdev_aio.so.6.0 00:02:50.663 SYMLINK libspdk_bdev_null.so 00:02:50.663 SO libspdk_bdev_iscsi.so.6.0 00:02:50.663 SYMLINK libspdk_bdev_ftl.so 00:02:50.663 SYMLINK libspdk_bdev_passthru.so 00:02:50.663 SYMLINK libspdk_bdev_malloc.so 00:02:50.663 SYMLINK libspdk_bdev_delay.so 00:02:50.663 SYMLINK libspdk_bdev_aio.so 00:02:50.663 SYMLINK libspdk_bdev_zone_block.so 00:02:50.663 LIB libspdk_bdev_lvol.a 00:02:50.663 LIB libspdk_bdev_virtio.a 00:02:50.663 SYMLINK libspdk_bdev_iscsi.so 00:02:50.663 SO libspdk_bdev_lvol.so.6.0 00:02:50.663 SO libspdk_bdev_virtio.so.6.0 00:02:50.663 SYMLINK libspdk_bdev_lvol.so 00:02:50.663 SYMLINK libspdk_bdev_virtio.so 00:02:50.920 LIB libspdk_bdev_raid.a 00:02:50.920 SO libspdk_bdev_raid.so.6.0 00:02:51.178 SYMLINK libspdk_bdev_raid.so 00:02:51.746 LIB libspdk_bdev_nvme.a 00:02:51.746 SO libspdk_bdev_nvme.so.7.0 00:02:52.009 SYMLINK libspdk_bdev_nvme.so 00:02:52.573 CC module/event/subsystems/keyring/keyring.o 00:02:52.573 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.573 CC module/event/subsystems/vmd/vmd.o 00:02:52.573 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:52.573 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.573 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.573 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.831 CC module/event/subsystems/sock/sock.o 00:02:52.831 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:52.831 LIB libspdk_event_keyring.a 00:02:52.831 LIB libspdk_event_vhost_blk.a 00:02:52.831 LIB libspdk_event_scheduler.a 00:02:52.831 LIB libspdk_event_vmd.a 00:02:52.831 LIB libspdk_event_iobuf.a 00:02:52.831 SO libspdk_event_keyring.so.1.0 00:02:52.831 LIB libspdk_event_vfu_tgt.a 00:02:52.831 SO libspdk_event_vhost_blk.so.3.0 00:02:52.831 LIB libspdk_event_sock.a 00:02:52.831 SO libspdk_event_scheduler.so.4.0 00:02:52.831 SO libspdk_event_vfu_tgt.so.3.0 00:02:52.831 SO libspdk_event_vmd.so.6.0 00:02:52.831 SO libspdk_event_iobuf.so.3.0 00:02:52.831 SO libspdk_event_sock.so.5.0 00:02:52.831 SYMLINK libspdk_event_keyring.so 00:02:52.831 SYMLINK libspdk_event_vhost_blk.so 00:02:52.831 SYMLINK libspdk_event_scheduler.so 00:02:52.831 SYMLINK libspdk_event_vfu_tgt.so 00:02:53.089 SYMLINK libspdk_event_vmd.so 00:02:53.089 SYMLINK libspdk_event_iobuf.so 00:02:53.089 SYMLINK libspdk_event_sock.so 00:02:53.347 CC module/event/subsystems/accel/accel.o 00:02:53.347 LIB libspdk_event_accel.a 00:02:53.604 SO libspdk_event_accel.so.6.0 00:02:53.604 SYMLINK libspdk_event_accel.so 00:02:53.863 CC module/event/subsystems/bdev/bdev.o 00:02:54.121 LIB libspdk_event_bdev.a 00:02:54.121 SO libspdk_event_bdev.so.6.0 00:02:54.121 SYMLINK libspdk_event_bdev.so 00:02:54.687 CC 
module/event/subsystems/scsi/scsi.o 00:02:54.687 CC module/event/subsystems/nbd/nbd.o 00:02:54.687 CC module/event/subsystems/ublk/ublk.o 00:02:54.687 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:54.687 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:54.687 LIB libspdk_event_nbd.a 00:02:54.687 LIB libspdk_event_scsi.a 00:02:54.687 LIB libspdk_event_ublk.a 00:02:54.687 SO libspdk_event_nbd.so.6.0 00:02:54.687 SO libspdk_event_scsi.so.6.0 00:02:54.687 SO libspdk_event_ublk.so.3.0 00:02:54.687 LIB libspdk_event_nvmf.a 00:02:54.687 SYMLINK libspdk_event_nbd.so 00:02:54.687 SYMLINK libspdk_event_scsi.so 00:02:54.945 SYMLINK libspdk_event_ublk.so 00:02:54.945 SO libspdk_event_nvmf.so.6.0 00:02:54.945 SYMLINK libspdk_event_nvmf.so 00:02:55.202 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:55.202 CC module/event/subsystems/iscsi/iscsi.o 00:02:55.202 LIB libspdk_event_vhost_scsi.a 00:02:55.460 LIB libspdk_event_iscsi.a 00:02:55.460 SO libspdk_event_vhost_scsi.so.3.0 00:02:55.460 SO libspdk_event_iscsi.so.6.0 00:02:55.460 SYMLINK libspdk_event_vhost_scsi.so 00:02:55.460 SYMLINK libspdk_event_iscsi.so 00:02:55.717 SO libspdk.so.6.0 00:02:55.717 SYMLINK libspdk.so 00:02:55.975 CC app/trace_record/trace_record.o 00:02:55.975 CC app/spdk_top/spdk_top.o 00:02:55.975 CXX app/trace/trace.o 00:02:55.975 CC app/spdk_nvme_perf/perf.o 00:02:55.975 TEST_HEADER include/spdk/accel.h 00:02:55.975 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.975 TEST_HEADER include/spdk/accel_module.h 00:02:55.975 TEST_HEADER include/spdk/assert.h 00:02:55.975 TEST_HEADER include/spdk/base64.h 00:02:55.975 TEST_HEADER include/spdk/barrier.h 00:02:55.975 TEST_HEADER include/spdk/bdev.h 00:02:55.975 CC test/rpc_client/rpc_client_test.o 00:02:55.975 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.975 TEST_HEADER include/spdk/bdev_module.h 00:02:55.975 CC app/spdk_lspci/spdk_lspci.o 00:02:55.975 CC app/spdk_nvme_identify/identify.o 00:02:55.975 TEST_HEADER include/spdk/bit_array.h 00:02:55.975 TEST_HEADER include/spdk/bit_pool.h 00:02:55.975 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.975 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.975 TEST_HEADER include/spdk/blobfs.h 00:02:55.975 TEST_HEADER include/spdk/blob.h 00:02:55.975 TEST_HEADER include/spdk/conf.h 00:02:55.975 TEST_HEADER include/spdk/cpuset.h 00:02:55.975 TEST_HEADER include/spdk/config.h 00:02:55.975 TEST_HEADER include/spdk/crc16.h 00:02:55.975 TEST_HEADER include/spdk/crc32.h 00:02:55.975 TEST_HEADER include/spdk/dif.h 00:02:55.975 TEST_HEADER include/spdk/crc64.h 00:02:55.975 TEST_HEADER include/spdk/dma.h 00:02:55.975 TEST_HEADER include/spdk/endian.h 00:02:55.975 TEST_HEADER include/spdk/env.h 00:02:55.975 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.975 TEST_HEADER include/spdk/event.h 00:02:55.975 TEST_HEADER include/spdk/fd.h 00:02:55.975 TEST_HEADER include/spdk/fd_group.h 00:02:55.975 TEST_HEADER include/spdk/ftl.h 00:02:55.975 TEST_HEADER include/spdk/file.h 00:02:55.975 TEST_HEADER include/spdk/histogram_data.h 00:02:55.975 TEST_HEADER include/spdk/hexlify.h 00:02:55.975 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.975 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.975 TEST_HEADER include/spdk/init.h 00:02:55.975 TEST_HEADER include/spdk/idxd.h 00:02:55.975 TEST_HEADER include/spdk/ioat.h 00:02:55.976 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.976 CC app/nvmf_tgt/nvmf_main.o 00:02:55.976 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.976 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.976 TEST_HEADER include/spdk/iscsi_spec.h 
00:02:55.976 TEST_HEADER include/spdk/json.h 00:02:55.976 TEST_HEADER include/spdk/keyring_module.h 00:02:55.976 TEST_HEADER include/spdk/likely.h 00:02:55.976 TEST_HEADER include/spdk/log.h 00:02:55.976 TEST_HEADER include/spdk/keyring.h 00:02:55.976 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.976 TEST_HEADER include/spdk/lvol.h 00:02:55.976 TEST_HEADER include/spdk/mmio.h 00:02:55.976 TEST_HEADER include/spdk/memory.h 00:02:55.976 TEST_HEADER include/spdk/net.h 00:02:55.976 TEST_HEADER include/spdk/nbd.h 00:02:55.976 TEST_HEADER include/spdk/notify.h 00:02:55.976 TEST_HEADER include/spdk/nvme.h 00:02:55.976 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.976 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.976 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.976 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.976 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.976 CC app/spdk_dd/spdk_dd.o 00:02:55.976 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.976 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.976 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.976 TEST_HEADER include/spdk/nvmf.h 00:02:55.976 TEST_HEADER include/spdk/opal_spec.h 00:02:55.976 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.976 TEST_HEADER include/spdk/opal.h 00:02:55.976 TEST_HEADER include/spdk/pipe.h 00:02:55.976 TEST_HEADER include/spdk/pci_ids.h 00:02:55.976 TEST_HEADER include/spdk/queue.h 00:02:55.976 TEST_HEADER include/spdk/reduce.h 00:02:55.976 CC app/spdk_tgt/spdk_tgt.o 00:02:55.976 TEST_HEADER include/spdk/rpc.h 00:02:55.976 TEST_HEADER include/spdk/scheduler.h 00:02:55.976 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.976 TEST_HEADER include/spdk/scsi.h 00:02:55.976 TEST_HEADER include/spdk/string.h 00:02:55.976 TEST_HEADER include/spdk/sock.h 00:02:55.976 TEST_HEADER include/spdk/stdinc.h 00:02:55.976 TEST_HEADER include/spdk/trace_parser.h 00:02:55.976 TEST_HEADER include/spdk/tree.h 00:02:55.976 TEST_HEADER include/spdk/thread.h 00:02:55.976 TEST_HEADER include/spdk/trace.h 00:02:55.976 TEST_HEADER include/spdk/util.h 00:02:55.976 TEST_HEADER include/spdk/ublk.h 00:02:55.976 TEST_HEADER include/spdk/uuid.h 00:02:55.976 TEST_HEADER include/spdk/version.h 00:02:55.976 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.976 TEST_HEADER include/spdk/vhost.h 00:02:55.976 TEST_HEADER include/spdk/vmd.h 00:02:55.976 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.976 TEST_HEADER include/spdk/xor.h 00:02:55.976 TEST_HEADER include/spdk/zipf.h 00:02:56.247 CXX test/cpp_headers/accel.o 00:02:56.247 CXX test/cpp_headers/assert.o 00:02:56.247 CXX test/cpp_headers/accel_module.o 00:02:56.247 CXX test/cpp_headers/base64.o 00:02:56.247 CXX test/cpp_headers/barrier.o 00:02:56.247 CXX test/cpp_headers/bdev_module.o 00:02:56.247 CXX test/cpp_headers/bdev.o 00:02:56.247 CXX test/cpp_headers/bdev_zone.o 00:02:56.247 CXX test/cpp_headers/bit_array.o 00:02:56.247 CXX test/cpp_headers/bit_pool.o 00:02:56.247 CXX test/cpp_headers/blob_bdev.o 00:02:56.247 CXX test/cpp_headers/blobfs_bdev.o 00:02:56.247 CXX test/cpp_headers/config.o 00:02:56.247 CXX test/cpp_headers/conf.o 00:02:56.247 CXX test/cpp_headers/blobfs.o 00:02:56.247 CXX test/cpp_headers/cpuset.o 00:02:56.247 CXX test/cpp_headers/blob.o 00:02:56.247 CXX test/cpp_headers/crc32.o 00:02:56.247 CXX test/cpp_headers/crc16.o 00:02:56.247 CXX test/cpp_headers/crc64.o 00:02:56.247 CXX test/cpp_headers/dif.o 00:02:56.247 CXX test/cpp_headers/endian.o 00:02:56.247 CXX test/cpp_headers/dma.o 00:02:56.247 CXX test/cpp_headers/env_dpdk.o 00:02:56.247 CXX test/cpp_headers/env.o 00:02:56.247 
CXX test/cpp_headers/event.o 00:02:56.247 CXX test/cpp_headers/fd_group.o 00:02:56.247 CXX test/cpp_headers/fd.o 00:02:56.247 CXX test/cpp_headers/ftl.o 00:02:56.247 CXX test/cpp_headers/file.o 00:02:56.247 CXX test/cpp_headers/gpt_spec.o 00:02:56.247 CXX test/cpp_headers/histogram_data.o 00:02:56.247 CXX test/cpp_headers/hexlify.o 00:02:56.247 CXX test/cpp_headers/idxd.o 00:02:56.247 CXX test/cpp_headers/idxd_spec.o 00:02:56.247 CXX test/cpp_headers/init.o 00:02:56.247 CXX test/cpp_headers/ioat.o 00:02:56.247 CXX test/cpp_headers/json.o 00:02:56.247 CXX test/cpp_headers/ioat_spec.o 00:02:56.247 CXX test/cpp_headers/iscsi_spec.o 00:02:56.247 CXX test/cpp_headers/jsonrpc.o 00:02:56.247 CXX test/cpp_headers/keyring.o 00:02:56.247 CXX test/cpp_headers/log.o 00:02:56.247 CXX test/cpp_headers/likely.o 00:02:56.247 CXX test/cpp_headers/keyring_module.o 00:02:56.247 CXX test/cpp_headers/lvol.o 00:02:56.247 CXX test/cpp_headers/memory.o 00:02:56.247 CXX test/cpp_headers/mmio.o 00:02:56.247 CXX test/cpp_headers/nbd.o 00:02:56.247 CXX test/cpp_headers/notify.o 00:02:56.247 CXX test/cpp_headers/net.o 00:02:56.247 CXX test/cpp_headers/nvme.o 00:02:56.247 CXX test/cpp_headers/nvme_intel.o 00:02:56.247 CXX test/cpp_headers/nvme_ocssd.o 00:02:56.247 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:56.247 CXX test/cpp_headers/nvme_spec.o 00:02:56.247 CXX test/cpp_headers/nvme_zns.o 00:02:56.247 CXX test/cpp_headers/nvmf_cmd.o 00:02:56.247 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:56.247 CXX test/cpp_headers/nvmf.o 00:02:56.247 CXX test/cpp_headers/nvmf_spec.o 00:02:56.247 CXX test/cpp_headers/nvmf_transport.o 00:02:56.247 CXX test/cpp_headers/opal.o 00:02:56.247 CXX test/cpp_headers/opal_spec.o 00:02:56.247 CXX test/cpp_headers/pci_ids.o 00:02:56.247 CXX test/cpp_headers/pipe.o 00:02:56.247 CXX test/cpp_headers/reduce.o 00:02:56.247 CXX test/cpp_headers/queue.o 00:02:56.247 CXX test/cpp_headers/rpc.o 00:02:56.247 CXX test/cpp_headers/scsi.o 00:02:56.247 CXX test/cpp_headers/scheduler.o 00:02:56.247 CXX test/cpp_headers/scsi_spec.o 00:02:56.247 CXX test/cpp_headers/sock.o 00:02:56.247 CXX test/cpp_headers/stdinc.o 00:02:56.247 CXX test/cpp_headers/string.o 00:02:56.247 CXX test/cpp_headers/thread.o 00:02:56.247 CXX test/cpp_headers/trace.o 00:02:56.247 CXX test/cpp_headers/trace_parser.o 00:02:56.247 CC test/app/jsoncat/jsoncat.o 00:02:56.247 CXX test/cpp_headers/tree.o 00:02:56.247 CXX test/cpp_headers/ublk.o 00:02:56.247 CXX test/cpp_headers/util.o 00:02:56.247 CXX test/cpp_headers/uuid.o 00:02:56.247 CXX test/cpp_headers/version.o 00:02:56.247 CC examples/util/zipf/zipf.o 00:02:56.247 CC test/thread/poller_perf/poller_perf.o 00:02:56.247 CC examples/ioat/perf/perf.o 00:02:56.247 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.247 CC test/app/histogram_perf/histogram_perf.o 00:02:56.247 CC examples/ioat/verify/verify.o 00:02:56.247 CC app/fio/nvme/fio_plugin.o 00:02:56.247 CC test/env/pci/pci_ut.o 00:02:56.247 CC test/env/vtophys/vtophys.o 00:02:56.247 CC test/env/memory/memory_ut.o 00:02:56.247 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.247 CC test/app/stub/stub.o 00:02:56.247 CC test/dma/test_dma/test_dma.o 00:02:56.541 CC app/fio/bdev/fio_plugin.o 00:02:56.541 CXX test/cpp_headers/vfio_user_spec.o 00:02:56.541 LINK spdk_lspci 00:02:56.541 CC test/app/bdev_svc/bdev_svc.o 00:02:56.541 LINK rpc_client_test 00:02:56.541 LINK spdk_trace_record 00:02:56.814 LINK interrupt_tgt 00:02:56.814 LINK spdk_nvme_discover 00:02:56.814 LINK nvmf_tgt 00:02:56.814 CC test/env/mem_callbacks/mem_callbacks.o 
00:02:56.814 LINK iscsi_tgt 00:02:56.814 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:56.814 LINK jsoncat 00:02:56.814 LINK histogram_perf 00:02:57.074 CXX test/cpp_headers/vhost.o 00:02:57.074 CXX test/cpp_headers/vmd.o 00:02:57.074 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:57.074 CXX test/cpp_headers/xor.o 00:02:57.074 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:57.074 CXX test/cpp_headers/zipf.o 00:02:57.074 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.074 LINK zipf 00:02:57.074 LINK vtophys 00:02:57.074 LINK spdk_tgt 00:02:57.074 LINK poller_perf 00:02:57.074 LINK env_dpdk_post_init 00:02:57.074 LINK stub 00:02:57.074 LINK bdev_svc 00:02:57.074 LINK ioat_perf 00:02:57.074 LINK verify 00:02:57.074 LINK spdk_dd 00:02:57.074 LINK spdk_trace 00:02:57.074 LINK pci_ut 00:02:57.334 LINK test_dma 00:02:57.334 LINK spdk_nvme 00:02:57.334 LINK nvme_fuzz 00:02:57.334 LINK vhost_fuzz 00:02:57.334 LINK spdk_bdev 00:02:57.334 LINK spdk_top 00:02:57.334 LINK spdk_nvme_perf 00:02:57.593 CC test/event/reactor/reactor.o 00:02:57.593 CC test/event/app_repeat/app_repeat.o 00:02:57.593 CC test/event/event_perf/event_perf.o 00:02:57.593 LINK mem_callbacks 00:02:57.593 LINK spdk_nvme_identify 00:02:57.593 CC examples/sock/hello_world/hello_sock.o 00:02:57.593 CC examples/idxd/perf/perf.o 00:02:57.593 CC examples/vmd/lsvmd/lsvmd.o 00:02:57.593 CC test/event/reactor_perf/reactor_perf.o 00:02:57.593 CC examples/thread/thread/thread_ex.o 00:02:57.593 CC examples/vmd/led/led.o 00:02:57.593 CC test/event/scheduler/scheduler.o 00:02:57.593 CC app/vhost/vhost.o 00:02:57.593 LINK lsvmd 00:02:57.593 LINK reactor 00:02:57.593 LINK app_repeat 00:02:57.593 LINK event_perf 00:02:57.593 LINK reactor_perf 00:02:57.593 LINK led 00:02:57.852 LINK scheduler 00:02:57.852 LINK hello_sock 00:02:57.852 CC test/nvme/reserve/reserve.o 00:02:57.852 CC test/nvme/cuse/cuse.o 00:02:57.852 CC test/nvme/reset/reset.o 00:02:57.852 CC test/nvme/connect_stress/connect_stress.o 00:02:57.852 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.852 CC test/nvme/startup/startup.o 00:02:57.852 LINK memory_ut 00:02:57.852 CC test/nvme/overhead/overhead.o 00:02:57.852 CC test/nvme/compliance/nvme_compliance.o 00:02:57.852 CC test/nvme/sgl/sgl.o 00:02:57.852 LINK thread 00:02:57.852 CC test/nvme/err_injection/err_injection.o 00:02:57.852 CC test/nvme/fdp/fdp.o 00:02:57.852 CC test/nvme/aer/aer.o 00:02:57.852 CC test/nvme/e2edp/nvme_dp.o 00:02:57.852 CC test/nvme/simple_copy/simple_copy.o 00:02:57.852 CC test/blobfs/mkfs/mkfs.o 00:02:57.852 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.852 CC test/nvme/boot_partition/boot_partition.o 00:02:57.852 CC test/accel/dif/dif.o 00:02:57.852 LINK vhost 00:02:57.852 LINK idxd_perf 00:02:57.852 CC test/lvol/esnap/esnap.o 00:02:57.852 LINK connect_stress 00:02:57.852 LINK err_injection 00:02:57.852 LINK fused_ordering 00:02:57.852 LINK reserve 00:02:57.852 LINK startup 00:02:57.852 LINK boot_partition 00:02:57.852 LINK doorbell_aers 00:02:57.852 LINK mkfs 00:02:58.144 LINK simple_copy 00:02:58.144 LINK reset 00:02:58.144 LINK sgl 00:02:58.144 LINK aer 00:02:58.144 LINK nvme_dp 00:02:58.144 LINK overhead 00:02:58.144 LINK nvme_compliance 00:02:58.144 LINK fdp 00:02:58.144 LINK dif 00:02:58.144 CC examples/nvme/reconnect/reconnect.o 00:02:58.145 CC examples/nvme/hello_world/hello_world.o 00:02:58.145 CC examples/nvme/abort/abort.o 00:02:58.145 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:58.145 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:58.145 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:02:58.145 CC examples/nvme/arbitration/arbitration.o 00:02:58.145 CC examples/nvme/hotplug/hotplug.o 00:02:58.145 LINK iscsi_fuzz 00:02:58.404 CC examples/accel/perf/accel_perf.o 00:02:58.404 CC examples/blob/hello_world/hello_blob.o 00:02:58.404 CC examples/blob/cli/blobcli.o 00:02:58.404 LINK pmr_persistence 00:02:58.404 LINK hello_world 00:02:58.404 LINK cmb_copy 00:02:58.404 LINK hotplug 00:02:58.404 LINK reconnect 00:02:58.404 LINK arbitration 00:02:58.404 LINK abort 00:02:58.663 LINK hello_blob 00:02:58.663 LINK nvme_manage 00:02:58.663 LINK accel_perf 00:02:58.663 CC test/bdev/bdevio/bdevio.o 00:02:58.663 LINK cuse 00:02:58.663 LINK blobcli 00:02:58.922 LINK bdevio 00:02:59.181 CC examples/bdev/hello_world/hello_bdev.o 00:02:59.181 CC examples/bdev/bdevperf/bdevperf.o 00:02:59.441 LINK hello_bdev 00:02:59.701 LINK bdevperf 00:03:00.267 CC examples/nvmf/nvmf/nvmf.o 00:03:00.526 LINK nvmf 00:03:01.462 LINK esnap 00:03:01.722 00:03:01.722 real 0m49.681s 00:03:01.722 user 6m25.413s 00:03:01.722 sys 4m12.076s 00:03:01.722 10:18:05 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:01.722 10:18:05 make -- common/autotest_common.sh@10 -- $ set +x 00:03:01.722 ************************************ 00:03:01.722 END TEST make 00:03:01.722 ************************************ 00:03:01.722 10:18:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:01.722 10:18:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:01.722 10:18:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:01.722 10:18:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.722 10:18:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:01.722 10:18:05 -- pm/common@44 -- $ pid=3591244 00:03:01.722 10:18:05 -- pm/common@50 -- $ kill -TERM 3591244 00:03:01.722 10:18:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.722 10:18:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:01.722 10:18:05 -- pm/common@44 -- $ pid=3591246 00:03:01.722 10:18:05 -- pm/common@50 -- $ kill -TERM 3591246 00:03:01.722 10:18:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.722 10:18:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:01.722 10:18:05 -- pm/common@44 -- $ pid=3591248 00:03:01.722 10:18:05 -- pm/common@50 -- $ kill -TERM 3591248 00:03:01.722 10:18:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.722 10:18:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:01.722 10:18:05 -- pm/common@44 -- $ pid=3591275 00:03:01.722 10:18:05 -- pm/common@50 -- $ sudo -E kill -TERM 3591275 00:03:01.722 10:18:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:01.722 10:18:05 -- nvmf/common.sh@7 -- # uname -s 00:03:01.722 10:18:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:01.722 10:18:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:01.722 10:18:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:01.722 10:18:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:01.722 10:18:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:01.722 10:18:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:01.722 10:18:05 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:01.722 10:18:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:01.722 10:18:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:01.722 10:18:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:01.722 10:18:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:03:01.722 10:18:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:03:01.722 10:18:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:01.722 10:18:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:01.722 10:18:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:01.722 10:18:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:01.722 10:18:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:01.722 10:18:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:01.722 10:18:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.722 10:18:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.722 10:18:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.722 10:18:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.722 10:18:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.722 10:18:05 -- paths/export.sh@5 -- # export PATH 00:03:01.722 10:18:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.722 10:18:05 -- nvmf/common.sh@47 -- # : 0 00:03:01.722 10:18:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:01.722 10:18:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:01.722 10:18:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:01.722 10:18:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:01.722 10:18:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:01.722 10:18:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:01.722 10:18:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:01.722 10:18:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:01.722 10:18:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:01.722 10:18:05 -- spdk/autotest.sh@32 -- # uname -s 00:03:01.722 10:18:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:01.722 10:18:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:01.722 10:18:05 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:01.722 10:18:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:01.722 10:18:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:01.722 10:18:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:01.722 10:18:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:01.722 10:18:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:01.722 10:18:05 -- spdk/autotest.sh@48 -- # udevadm_pid=3652707 00:03:01.722 10:18:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:01.723 10:18:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:01.723 10:18:05 -- pm/common@17 -- # local monitor 00:03:01.723 10:18:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.723 10:18:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.723 10:18:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.723 10:18:05 -- pm/common@21 -- # date +%s 00:03:01.723 10:18:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.723 10:18:05 -- pm/common@21 -- # date +%s 00:03:01.723 10:18:05 -- pm/common@25 -- # sleep 1 00:03:01.723 10:18:05 -- pm/common@21 -- # date +%s 00:03:01.723 10:18:05 -- pm/common@21 -- # date +%s 00:03:01.723 10:18:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721895485 00:03:01.723 10:18:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721895485 00:03:01.723 10:18:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721895485 00:03:01.723 10:18:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721895485 00:03:01.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895485_collect-vmstat.pm.log 00:03:01.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895485_collect-cpu-load.pm.log 00:03:01.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895485_collect-cpu-temp.pm.log 00:03:01.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721895485_collect-bmc-pm.bmc.pm.log 00:03:02.920 10:18:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:02.920 10:18:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:02.920 10:18:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:02.920 10:18:06 -- common/autotest_common.sh@10 -- # set +x 00:03:02.920 10:18:06 -- spdk/autotest.sh@59 -- # create_test_list 00:03:02.920 10:18:06 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:02.920 10:18:06 -- common/autotest_common.sh@10 -- # set +x 00:03:02.920 10:18:06 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:02.920 10:18:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.920 10:18:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.920 10:18:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:02.920 10:18:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.920 10:18:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:02.920 10:18:06 -- common/autotest_common.sh@1455 -- # uname 00:03:02.920 10:18:06 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:02.920 10:18:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:02.920 10:18:06 -- common/autotest_common.sh@1475 -- # uname 00:03:02.920 10:18:06 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:02.920 10:18:06 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:02.920 10:18:06 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:02.920 10:18:06 -- spdk/autotest.sh@72 -- # hash lcov 00:03:02.920 10:18:06 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:02.920 10:18:06 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:02.920 --rc lcov_branch_coverage=1 00:03:02.920 --rc lcov_function_coverage=1 00:03:02.920 --rc genhtml_branch_coverage=1 00:03:02.920 --rc genhtml_function_coverage=1 00:03:02.920 --rc genhtml_legend=1 00:03:02.920 --rc geninfo_all_blocks=1 00:03:02.920 ' 00:03:02.920 10:18:06 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:02.920 --rc lcov_branch_coverage=1 00:03:02.920 --rc lcov_function_coverage=1 00:03:02.920 --rc genhtml_branch_coverage=1 00:03:02.920 --rc genhtml_function_coverage=1 00:03:02.920 --rc genhtml_legend=1 00:03:02.920 --rc geninfo_all_blocks=1 00:03:02.920 ' 00:03:02.920 10:18:06 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:02.920 --rc lcov_branch_coverage=1 00:03:02.920 --rc lcov_function_coverage=1 00:03:02.920 --rc genhtml_branch_coverage=1 00:03:02.920 --rc genhtml_function_coverage=1 00:03:02.920 --rc genhtml_legend=1 00:03:02.920 --rc geninfo_all_blocks=1 00:03:02.920 --no-external' 00:03:02.920 10:18:06 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:02.920 --rc lcov_branch_coverage=1 00:03:02.920 --rc lcov_function_coverage=1 00:03:02.920 --rc genhtml_branch_coverage=1 00:03:02.920 --rc genhtml_function_coverage=1 00:03:02.920 --rc genhtml_legend=1 00:03:02.920 --rc geninfo_all_blocks=1 00:03:02.920 --no-external' 00:03:02.920 10:18:06 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:02.920 lcov: LCOV version 1.14 00:03:02.920 10:18:06 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:04.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:04.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:04.300 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:04.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:04.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:04.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:04.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:04.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no 
functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:04.822 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:04.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:04.822 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:05.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:05.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:17.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:17.314 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:29.611 10:18:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:29.611 10:18:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.611 10:18:31 -- common/autotest_common.sh@10 -- # set +x 00:03:29.611 10:18:31 -- spdk/autotest.sh@91 -- # rm -f 00:03:29.611 10:18:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.990 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:30.990 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:31.249 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:31.250 10:18:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:31.250 10:18:34 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:31.250 10:18:34 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:31.250 10:18:34 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:31.250 10:18:34 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:31.250 10:18:34 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:31.250 10:18:34 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:31.250 10:18:34 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:31.250 10:18:34 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:31.250 10:18:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:31.250 10:18:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.250 10:18:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:31.250 10:18:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:31.250 10:18:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:31.250 10:18:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:31.508 No valid GPT data, bailing 00:03:31.508 10:18:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:31.508 10:18:34 -- scripts/common.sh@391 -- # pt= 00:03:31.508 10:18:34 -- scripts/common.sh@392 -- # return 1 00:03:31.508 10:18:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:31.508 1+0 records in 00:03:31.508 1+0 records out 00:03:31.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650596 s, 161 MB/s 00:03:31.508 10:18:34 -- spdk/autotest.sh@118 -- # sync 00:03:31.508 10:18:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:31.508 10:18:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:31.508 10:18:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.628 10:18:42 -- spdk/autotest.sh@124 -- # uname -s 00:03:39.628 10:18:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:39.628 10:18:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.628 10:18:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.628 10:18:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.628 10:18:42 -- common/autotest_common.sh@10 -- # set +x 00:03:39.628 ************************************ 00:03:39.628 START TEST setup.sh 00:03:39.628 ************************************ 00:03:39.629 10:18:42 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.629 * Looking for test storage... 00:03:39.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:39.629 10:18:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:39.629 10:18:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:39.629 10:18:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:39.629 10:18:42 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.629 10:18:42 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.629 10:18:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.629 ************************************ 00:03:39.629 START TEST acl 00:03:39.629 ************************************ 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:39.629 * Looking for test storage... 
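
The pre-cleanup pass traced above checks each NVMe namespace for zoned support (/sys/block/<dev>/queue/zoned) and for a recognizable partition table (spdk-gpt.py plus blkid -s PTTYPE) before zeroing its first megabyte with dd. A minimal stand-alone sketch of that idea, using plain coreutils/util-linux calls instead of SPDK's spdk-gpt.py helper and a simplified /dev/nvme*n1 glob, might look like this:

  #!/usr/bin/env bash
  # Illustrative only: probe conventional NVMe namespaces and scrub the first MiB
  # of any device on which no partition table can be identified.
  for dev in /dev/nvme*n1; do
      name=$(basename "$dev")
      zoned=/sys/block/$name/queue/zoned
      # Skip zoned namespaces; a plain sequential write check does not apply to them.
      if [ -e "$zoned" ] && [ "$(cat "$zoned")" != none ]; then
          echo "skipping zoned device $dev"
          continue
      fi
      # Only touch devices for which blkid reports no partition-table type.
      if [ -z "$(blkid -s PTTYPE -o value "$dev")" ]; then
          dd if=/dev/zero of="$dev" bs=1M count=1    # same quick write test as in the log above
      fi
  done
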
00:03:39.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:39.629 10:18:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.629 10:18:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.629 10:18:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:39.629 10:18:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:39.629 10:18:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:39.629 10:18:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:39.629 10:18:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:39.629 10:18:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.629 10:18:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.177 10:18:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:42.177 10:18:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:42.177 10:18:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.177 10:18:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:42.177 10:18:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.177 10:18:45 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.465 Hugepages 00:03:45.465 node hugesize free / total 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 00:03:45.465 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:45.465 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.466 10:18:48 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:45.466 10:18:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:45.466 10:18:48 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.466 10:18:48 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.466 10:18:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:45.466 ************************************ 00:03:45.466 START TEST denied 00:03:45.466 ************************************ 00:03:45.466 10:18:48 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:45.466 10:18:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:45.466 10:18:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:45.466 10:18:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:45.466 10:18:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.466 10:18:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.757 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:48.757 10:18:52 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.757 10:18:52 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.028 00:03:54.028 real 0m7.937s 00:03:54.028 user 0m2.489s 00:03:54.028 sys 0m4.811s 00:03:54.028 10:18:56 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.028 10:18:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:54.028 ************************************ 00:03:54.028 END TEST denied 00:03:54.028 ************************************ 00:03:54.028 10:18:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:54.028 10:18:56 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.028 10:18:56 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.028 10:18:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.028 ************************************ 00:03:54.028 START TEST allowed 00:03:54.028 ************************************ 00:03:54.028 10:18:56 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:54.028 10:18:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:54.028 10:18:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:54.028 10:18:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:54.028 10:18:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.028 10:18:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.259 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.259 10:19:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:58.259 10:19:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:58.259 10:19:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:58.259 10:19:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.259 10:19:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.546 00:04:01.546 real 0m8.228s 00:04:01.546 user 0m2.185s 00:04:01.546 sys 0m4.453s 00:04:01.546 10:19:05 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.546 10:19:05 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:01.546 ************************************ 00:04:01.546 END TEST allowed 00:04:01.546 ************************************ 00:04:01.546 00:04:01.546 real 0m22.839s 00:04:01.546 user 0m6.851s 00:04:01.546 sys 0m13.836s 00:04:01.546 10:19:05 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.546 10:19:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.546 ************************************ 00:04:01.546 END TEST acl 00:04:01.546 ************************************ 00:04:01.546 10:19:05 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:01.546 10:19:05 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.546 10:19:05 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.546 10:19:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.806 ************************************ 00:04:01.806 START TEST hugepages 00:04:01.806 ************************************ 00:04:01.806 10:19:05 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:01.806 * Looking for test storage... 00:04:01.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.806 10:19:05 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.807 10:19:05 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.807 10:19:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 10:19:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 10:19:05 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41694692 kB' 'MemAvailable: 45603400 kB' 'Buffers: 3736 kB' 'Cached: 10358432 kB' 'SwapCached: 0 kB' 'Active: 7217336 kB' 'Inactive: 3677348 kB' 'Active(anon): 6827472 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535876 kB' 'Mapped: 216568 kB' 'Shmem: 6294956 kB' 'KReclaimable: 488988 kB' 'Slab: 1120552 kB' 'SReclaimable: 488988 kB' 'SUnreclaim: 631564 kB' 'KernelStack: 22128 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 8251184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216596 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:01.807 10:19:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
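
The field-by-field comparison being traced here is setup/common.sh's get_meminfo walking /proc/meminfo, splitting each line on ': ' and skipping every field until it reaches Hugepagesize. A compact stand-in for that helper (simplified: it reads the system-wide /proc/meminfo only and ignores the per-node meminfo files the real function also supports) could be:

  #!/usr/bin/env bash
  # Illustrative stand-in for the get_meminfo helper being traced: print the value
  # of one /proc/meminfo field (values are in kB where the kernel reports a unit).
  get_meminfo() {
      local want=$1 var val unit
      while IFS=': ' read -r var val unit; do
          if [[ $var == "$want" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  get_meminfo Hugepagesize     # 2048 on this run (2 MiB hugepages)
  get_meminfo HugePages_Total  # 2048 on this run
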
00:04:01.807 10:19:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:01.807 10:19:05 setup.sh.hugepages: get_meminfo Hugepagesize walks /proc/meminfo with IFS=': ' read -r var val _ (setup/common.sh@31-@32), and the same continue repeats for every non-matching key from MemFree through HugePages_Surp until the Hugepagesize entry matches
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:01.808 10:19:05 setup.sh.hugepages: get_nodes (setup/hugepages.sh@207, @27-@33) walks /sys/devices/system/node/node+([0-9]) and records nodes_sys entries of 2048 and 0 for the two nodes; no_nodes=2
00:04:01.808 10:19:05 setup.sh.hugepages: clear_hp (setup/hugepages.sh@208, @37-@41) echoes 0 for every /sys/devices/system/node/node$node/hugepages/hugepages-* entry on both nodes
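The repetition condensed above is a single pattern: split each '<key>: <value>' line of /proc/meminfo on ': ', skip until the requested key matches, and print its value. A minimal standalone sketch of that pattern follows; the function name and the missing-key handling are illustrative, not the exact setup/common.sh code.

#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo scan traced above; the function name
# and the missing-key handling are illustrative, not setup/common.sh itself.
meminfo_value() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # skip non-matching keys, as the trace does
		echo "$val"                        # e.g. 2048 for Hugepagesize (most values are in kB)
		return 0
	done </proc/meminfo
	return 1                                   # requested key not present
}

meminfo_value Hugepagesize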
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:01.808 10:19:05 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:01.808 ************************************
00:04:01.809 START TEST default_setup
************************************
00:04:01.809 10:19:05 setup.sh.hugepages.default_setup: get_test_nr_hugepages 2097152 0 (setup/hugepages.sh@136, @49-@73) requests 2097152 kB of 2048 kB hugepages on node 0 only: nr_hugepages=1024, nodes_test[0]=1024
00:04:01.809 10:19:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:01.809 10:19:05 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
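scripts/setup.sh (its output follows below) is what populates that 1024-page pool before the test proceeds. The hugepage half of that work reduces to writing the per-node nr_hugepages files already named in this trace; a root-only sketch under that assumption, with an illustrative helper name rather than the script's own logic:

#!/usr/bin/env bash
# Illustrative helper, not scripts/setup.sh itself: set the 2048 kB hugepage
# pool on one NUMA node and read back what the kernel actually granted
# (the result can be lower than requested if memory is fragmented).
# Requires root.
set_node_hugepages() {
	local node=$1 count=$2
	local nr=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
	echo "$count" > "$nr"
	echo "node${node}: $(<"$nr") hugepages"
}

set_node_hugepages 0 1024   # mirrors the node-0 request traced above
set_node_hugepages 1 0      # mirrors clear_hp on the second node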
00:04:05.096 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:05.096 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:06.474 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
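Each line above is setup.sh reporting a driver switch: ioatdma (and the NVMe drive) handing a PCI function over to vfio-pci. The underlying kernel interface is the generic sysfs unbind/override/reprobe sequence; a sketch of that sequence for one of the devices listed, assuming root and an already-loaded vfio-pci module (this shows the kernel mechanism, not setup.sh's own code):

#!/usr/bin/env bash
# Rebind one PCI function to vfio-pci via sysfs; BDF taken from the trace.
bdf=0000:00:04.7
dev=/sys/bus/pci/devices/$bdf

# Detach whatever driver currently owns the device (ioatdma here)
if [[ -e $dev/driver ]]; then
	echo "$bdf" > "$dev/driver/unbind"
fi

# Restrict which driver may claim the device, then ask the PCI core to reprobe it
echo vfio-pci > "$dev/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe

readlink -f "$dev/driver"   # should now point at .../drivers/vfio-pci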
00:04:06.740 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:06.740 10:19:10 setup.sh.hugepages.default_setup: verify_nr_hugepages declares its node/sorted_t/sorted_s/surp/resv/anon locals (setup/hugepages.sh@89-@94)
00:04:06.740 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.740 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.740 10:19:10 setup.sh.hugepages.default_setup: get_meminfo maps the whole of /proc/meminfo into an array (setup/common.sh@17-@29) and prints it before scanning for the requested key:
00:04:06.740 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43841872 kB' 'MemAvailable: 47750580 kB' 'Buffers: 3736 kB' 'Cached: 10358564 kB' 'SwapCached: 0 kB' 'Active: 7237456 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847592 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555584 kB' 'Mapped: 216864 kB' 'Shmem: 6295088 kB' 'KReclaimable: 488988 kB' 'Slab: 1118608 kB' 'SReclaimable: 488988 kB' 'SUnreclaim: 629620 kB' 'KernelStack: 22368 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8268364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB'
00:04:06.741 10:19:10 setup.sh.hugepages.default_setup: the scan skips every key from MemTotal through HardwareCorrupted until AnonHugePages matches
00:04:06.741 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:06.741 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:06.741 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
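The AnonHugePages read above is gated by the check at setup/hugepages.sh@96, which looks at which transparent-hugepage policy the kernel has bracketed ("always [madvise] never" on this machine). A standalone version of that check, with illustrative variable names:

#!/usr/bin/env bash
# Read the active THP policy (the bracketed word) and only consult the
# AnonHugePages counter when THP is not disabled outright, mirroring the
# *\[never\]* test in the trace above.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

if [[ $thp != *"[never]"* ]]; then
	grep AnonHugePages /proc/meminfo   # anonymous THP usage, in kB
else
	echo "transparent hugepages disabled; skipping AnonHugePages"
fi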
00:04:06.741 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.741 10:19:10 setup.sh.hugepages.default_setup: get_meminfo re-reads /proc/meminfo the same way (setup/common.sh@17-@29) and prints the snapshot before scanning:
00:04:06.741 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43844920 kB' 'MemAvailable: 47753628 kB' 'Buffers: 3736 kB' 'Cached: 10358568 kB' 'SwapCached: 0 kB' 'Active: 7236336 kB' 'Inactive: 3677348 kB' 'Active(anon): 6846472 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554912 kB' 'Mapped: 216748 kB' 'Shmem: 6295092 kB' 'KReclaimable: 488988 kB' 'Slab: 1118472 kB' 'SReclaimable: 488988 kB' 'SUnreclaim: 629484 kB' 'KernelStack: 22272 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8268544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB'
00:04:06.742 10:19:10 setup.sh.hugepages.default_setup: the scan skips every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches
00:04:06.743 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:06.743 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:06.743 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:06.743 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.743 10:19:10 setup.sh.hugepages.default_setup: the same /proc/meminfo read and scan repeats for HugePages_Rsvd (setup/common.sh@17-@29)
00:04:06.743 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.743 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read
-r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43844396 kB' 'MemAvailable: 47753104 kB' 'Buffers: 3736 kB' 'Cached: 10358584 kB' 'SwapCached: 0 kB' 'Active: 7236808 kB' 'Inactive: 3677348 kB' 'Active(anon): 6846944 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555332 kB' 'Mapped: 216748 kB' 'Shmem: 6295108 kB' 'KReclaimable: 488988 kB' 'Slab: 1118440 kB' 'SReclaimable: 488988 kB' 'SUnreclaim: 629452 kB' 'KernelStack: 22304 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8268384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 10:19:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.746 nr_hugepages=1024 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.746 resv_hugepages=0 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.746 surplus_hugepages=0 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.746 anon_hugepages=0 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43844256 kB' 'MemAvailable: 47752964 kB' 'Buffers: 3736 kB' 'Cached: 10358604 kB' 'SwapCached: 0 kB' 'Active: 7237192 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847328 kB' 'Inactive(anon): 0 kB' 
'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555700 kB' 'Mapped: 216748 kB' 'Shmem: 6295128 kB' 'KReclaimable: 488988 kB' 'Slab: 1118440 kB' 'SReclaimable: 488988 kB' 'SUnreclaim: 629452 kB' 'KernelStack: 22384 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8267116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216596 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
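The stretch of trace above is setup/common.sh's get_meminfo helper walking a meminfo file one "key: value" row at a time with IFS=': ' and read -r var val _, skipping every key until it reaches the one it was asked for; the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l strings are simply how bash xtrace renders the quoted right-hand side of the [[ ... == ... ]] comparison (a literal match, not a glob). This pass ends with echo 1024 for HugePages_Total, after the earlier passes returned 0 for HugePages_Surp and HugePages_Rsvd. A minimal standalone sketch of that lookup pattern follows; the function name meminfo_lookup is illustrative and not the actual SPDK helper (whose body is not reproduced in this log), and a regex prefix strip stands in for the extglob substitution visible in the trace:

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup (illustrative name: meminfo_lookup).
    # Reads /proc/meminfo, or a per-node file when a NUMA node is given,
    # and prints the value recorded for a single key.
    meminfo_lookup() {
        local key=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # Per-node files prefix every row with "Node N "; the traced helper
            # strips that with an extglob substitution, a regex does the same job.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"   # any unit ("kB") falls into _
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    meminfo_lookup HugePages_Total     # 1024 on this host, matching the trace
    meminfo_lookup HugePages_Surp 0    # reads /sys/devices/system/node/node0/meminfo

With surp=0, resv=0, and a live HugePages_Total of 1024 in hand, hugepages.sh verifies that the kernel holds exactly what was requested, (( 1024 == nr_hugepages + surp + resv )), and then walks the NUMA nodes (no_nodes=2 on this machine), repeating the lookup against /sys/devices/system/node/node0/meminfo to see where the 1024 pages landed, which is what the trace that follows shows.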
00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27899816 kB' 'MemUsed: 4692268 kB' 'SwapCached: 0 kB' 'Active: 1264264 kB' 'Inactive: 274308 kB' 'Active(anon): 1104412 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1418400 kB' 'Mapped: 80140 kB' 'AnonPages: 123524 kB' 'Shmem: 984240 kB' 'KernelStack: 12888 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154560 kB' 'Slab: 417676 kB' 'SReclaimable: 154560 kB' 'SUnreclaim: 263116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeat the IFS=': ' / read -r var val _ / continue cycle for every remaining meminfo field of this HugePages_Surp lookup, from MemFree through HugePages_Total]
00:04:06.749 10:19:10
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.749 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.750 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.750 node0=1024 expecting 1024 00:04:06.750 10:19:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.750 00:04:06.750 real 0m4.924s 00:04:06.750 user 0m1.233s 00:04:06.750 sys 0m2.163s 00:04:06.750 10:19:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.750 10:19:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:06.750 ************************************ 00:04:06.750 END TEST default_setup 00:04:06.750 ************************************ 00:04:06.750 10:19:10 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:06.750 10:19:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.750 10:19:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.750 10:19:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.750 ************************************ 00:04:06.750 START TEST per_node_1G_alloc 00:04:06.750 ************************************ 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.750 10:19:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.038 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.038 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.038 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.038 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.039 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:10.039 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43831528 kB' 'MemAvailable: 47740204 kB' 'Buffers: 3736 kB' 'Cached: 10358712 kB' 'SwapCached: 0 kB' 'Active: 7242852 kB' 'Inactive: 3677348 kB' 'Active(anon): 6852988 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561044 kB' 'Mapped: 217264 kB' 'Shmem: 6295236 kB' 'KReclaimable: 488956 kB' 'Slab: 1118440 kB' 'SReclaimable: 488956 kB' 'SUnreclaim: 629484 kB' 'KernelStack: 22304 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8272696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216696 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.039 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeat the IFS=': ' / read -r var val _ / continue cycle for each /proc/meminfo field of the AnonHugePages lookup, from MemTotal through CommitLimit]
-- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
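The helper being traced here (get_meminfo in the test's setup/common.sh) follows a simple pattern that the xtrace walks through field by field: pick /proc/meminfo or a NUMA node's own meminfo file, strip any leading "Node <id>" prefix, then scan with IFS=': ' and read -r var val _ until the requested field is found and print its value. A minimal stand-alone sketch of that pattern, for readers following the trace (illustrative only: get_meminfo_sketch is a hypothetical name and the prefix handling is simplified compared with the real helper):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val rest
    # A per-node query reads that NUMA node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id>"; drop it so keys match.
    mem=("${mem[@]/#Node $node /}")
    # Field-by-field scan, mirroring the IFS=': ' / read -r var val _ loop in
    # the xtrace above; print the value of the first matching key and stop.
    while IFS=': ' read -r var val rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Against the meminfo dumps shown in this log, these calls would be expected
# to print 1024 and 0 respectively:
#   get_meminfo_sketch HugePages_Total
#   get_meminfo_sketch AnonHugePages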
00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43832584 kB' 'MemAvailable: 47741260 kB' 'Buffers: 3736 kB' 'Cached: 10358716 kB' 'SwapCached: 0 kB' 'Active: 7237380 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847516 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555616 kB' 'Mapped: 216748 kB' 'Shmem: 6295240 kB' 'KReclaimable: 488956 kB' 'Slab: 1118484 kB' 'SReclaimable: 488956 kB' 'SUnreclaim: 629528 kB' 'KernelStack: 22272 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8266596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.304 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.304 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeat the read/continue cycle for each /proc/meminfo field of the HugePages_Surp lookup, from Buffers through FileHugePages]
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43838148 kB' 'MemAvailable: 47746824 kB' 'Buffers: 3736 kB' 'Cached: 10358736 kB' 'SwapCached: 0 kB' 'Active: 7236256 kB' 'Inactive: 3677348 kB' 'Active(anon): 6846392 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554496 kB' 'Mapped: 215868 kB' 'Shmem: 6295260 kB' 'KReclaimable: 488956 kB' 'Slab: 1118468 kB' 'SReclaimable: 488956 kB' 'SUnreclaim: 629512 kB' 'KernelStack: 22272 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8259012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.306 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.307 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.308 nr_hugepages=1024 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.308 resv_hugepages=0 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.308 surplus_hugepages=0 00:04:10.308 10:19:13 
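[editor's sketch] The trace above is setup/common.sh's get_meminfo walking every "key: value" pair of /proc/meminfo with IFS=': ' and read -r var val _ until it hits the requested key (first HugePages_Surp, then HugePages_Rsvd), echoing the value and returning; both come back 0, so hugepages.sh records surp=0 and resv=0 alongside nr_hugepages=1024. A minimal standalone sketch of that lookup follows. It is not the SPDK helper itself: the function name lookup_meminfo is hypothetical, and the "Node N" prefix is stripped per line here, whereas common.sh strips it in one parameter expansion over the mapfile'd array, as the trace shows.

# Minimal sketch (not setup/common.sh) of the meminfo lookup traced above.
lookup_meminfo() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node statistics live under sysfs when a node number is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && file=/sys/devices/system/node/node$node/meminfo
    local var val rest
    while IFS=': ' read -r var val rest; do
        # Per-node files prefix every line with "Node <N>"; drop that prefix.
        # (hypothetical simplification of common.sh's array-wide expansion)
        if [[ $var == Node ]]; then
            IFS=': ' read -r var val rest <<<"$rest"
        fi
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

lookup_meminfo HugePages_Rsvd      # prints 0 on the box in this log
lookup_meminfo HugePages_Surp 0    # per-node lookup; also 0 here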
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.308 anon_hugepages=0 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43840828 kB' 'MemAvailable: 47749408 kB' 'Buffers: 3736 kB' 'Cached: 10358756 kB' 'SwapCached: 0 kB' 'Active: 7236300 kB' 'Inactive: 3677348 kB' 'Active(anon): 6846436 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554544 kB' 'Mapped: 215620 kB' 'Shmem: 6295280 kB' 'KReclaimable: 488860 kB' 'Slab: 1118332 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 629472 kB' 'KernelStack: 22256 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8259032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28945760 kB' 'MemUsed: 3646324 kB' 'SwapCached: 0 kB' 'Active: 1265644 kB' 'Inactive: 274308 kB' 'Active(anon): 1105792 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1418520 kB' 'Mapped: 79276 kB' 'AnonPages: 124640 kB' 'Shmem: 984360 kB' 'KernelStack: 12984 kB' 'PageTables: 3448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154560 kB' 'Slab: 417696 kB' 'SReclaimable: 154560 kB' 'SUnreclaim: 263136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- 
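[editor's sketch] At hugepages.sh@27-33 the trace switches from the global view to per-node bookkeeping: get_nodes globs /sys/devices/system/node/node+([0-9]), records 512 expected pages for each of the two nodes, and get_meminfo is then called with node=0 so mem_f points at /sys/devices/system/node/node0/meminfo (which reports HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0). A rough sketch of that node enumeration is below; the array name expected_pages is a hypothetical stand-in for the script's own nodes_sys/nodes_test bookkeeping.

# Sketch of the node enumeration traced around hugepages.sh@27-33.
shopt -s extglob nullglob
declare -A expected_pages
for node in /sys/devices/system/node/node+([0-9]); do
    # 512 pages expected on every node: 1024 total split across 2 nodes here.
    expected_pages[${node##*node}]=512
done
echo "nodes found: ${#expected_pages[@]}"   # 2 on the machine in this log
for n in "${!expected_pages[@]}"; do
    echo "node $n expects ${expected_pages[$n]} hugepages"
done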
setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the fields MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free are each read, compared against HugePages_Surp and skipped with 'continue']
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.312 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14896676 kB' 'MemUsed: 12806432 kB' 'SwapCached: 0 kB' 'Active: 5970428 kB' 'Inactive: 3403040 kB' 'Active(anon): 5740416 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3403040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8943996 kB' 'Mapped: 136344 kB' 'AnonPages: 429632 kB' 'Shmem: 5310944 kB' 'KernelStack: 9272 kB' 'PageTables: 5404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334300 kB' 'Slab: 700588 kB' 'SReclaimable: 334300 kB' 'SUnreclaim: 366288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: every node1 field from MemTotal through HugePages_Free, in the order printed above, is read, compared against HugePages_Surp and skipped with 'continue']
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:10.313 node0=512 expecting 512
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:10.313 node1=512 expecting 512
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:10.313 
00:04:10.313 real 0m3.474s
00:04:10.313 user 0m1.246s
00:04:10.313 sys 0m2.279s
00:04:10.313 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:10.314 10:19:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:10.314 ************************************
00:04:10.314 END TEST per_node_1G_alloc
00:04:10.314 ************************************
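Most of the trace above is setup/common.sh's get_meminfo walking a meminfo file one field at a time (IFS=': ' read in a loop) until it reaches the requested key; here HugePages_Surp reports 0 surplus pages on both nodes. A minimal bash sketch of that lookup pattern, with illustrative names rather than the exact SPDK helper:

#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup traced above; the function name and
# structure are illustrative, not a verbatim copy of SPDK's setup/common.sh.
meminfo_lookup() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local line var val _
    # Per-node counters live in sysfs and prefix every row with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}              # drop the per-node prefix when present
        IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key/value
        if [[ $var == "$get" ]]; then
            echo "$val"                         # e.g. 0 for HugePages_Surp in the run above
            return 0
        fi
    done < "$mem_f"
    return 1
}

meminfo_lookup HugePages_Surp 1                 # surplus hugepages on node 1; 0 in this run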
00:04:10.314 10:19:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:10.314 10:19:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:10.314 10:19:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:10.314 10:19:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:10.314 ************************************
00:04:10.314 START TEST even_2G_alloc
00:04:10.314 ************************************
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.314 10:19:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:13.606 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:13.606 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.606 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.607 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43840428 kB' 'MemAvailable: 47749008 kB' 'Buffers: 3736 kB' 'Cached: 10358864 kB' 'SwapCached: 0 kB' 'Active: 7237040 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847176 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554516 kB' 'Mapped: 215772 kB' 'Shmem: 6295388 kB' 'KReclaimable: 488860 kB' 'Slab: 1119196 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 630336 kB' 'KernelStack: 22384 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8260140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216884 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB'
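The dump above reports 'HugePages_Total: 1024' with 'Hugepagesize: 2048 kB', i.e. the 2 GiB that get_test_nr_hugepages requested at the start of this test, and HUGE_EVEN_ALLOC=yes asks for that pool to be spread evenly across the two NUMA nodes. A small sketch of the arithmetic (helper and variable names are illustrative, not SPDK's setup/hugepages.sh):

#!/usr/bin/env bash
# Illustrative arithmetic only: 2097152 kB requested, 2048 kB hugepages,
# 1024 pages in total, split evenly over 2 NUMA nodes -> 512 pages per node.
even_hugepage_split() {
    local size_kb=$1 hugepage_kb=$2 nr_nodes=$3
    local nr_hugepages=$(( size_kb / hugepage_kb ))
    local per_node=$(( nr_hugepages / nr_nodes ))
    local node
    for (( node = 0; node < nr_nodes; node++ )); do
        echo "node${node}=${per_node}"
    done
}

even_hugepage_split 2097152 2048 2   # prints node0=512 and node1=512

With the values from this run that is 512 pages per node, matching the per-node counts that get_test_nr_hugepages_per_node set up in the trace above.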
[setup/common.sh@31-32: every /proc/meminfo field from MemTotal through HardwareCorrupted, in the order printed above, is read, compared against AnonHugePages and skipped with 'continue']
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.608 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43841536 kB' 'MemAvailable: 47750116 kB' 'Buffers: 3736 kB' 'Cached: 10358868 kB' 'SwapCached: 0 kB' 'Active: 7237752 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847888 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555340 kB' 'Mapped: 215720 kB' 'Shmem: 6295392 kB' 'KReclaimable: 488860 kB' 'Slab: 1119196 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 630336 kB' 'KernelStack: 22400 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8261776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216804 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB'
[setup/common.sh@31-32: the fields MemTotal through KernelStack, in the order printed above, are read, compared against HugePages_Surp and skipped with 'continue']
00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.609 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
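The entries above are setup/common.sh's get_meminfo walking every field of /proc/meminfo while looking for HugePages_Surp; the backslash-heavy strings such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how `set -x` prints the literal right-hand side of the [[ $var == HugePages_Surp ]] test. A minimal sketch of that lookup, reconstructed from the trace (the actual helper in setup/common.sh may differ in detail):

#!/usr/bin/env bash
# Sketch of the lookup traced above: read the relevant meminfo file,
# strip any "Node <N> " prefix, then walk the "key: value" pairs until
# the requested field is found and print its numeric value.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local line var val mem

    # Per-node lookups read the node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp    # 0 in this run
get_meminfo HugePages_Total   # 1024 in this run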
00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43840752 kB' 'MemAvailable: 47749332 kB' 'Buffers: 3736 kB' 'Cached: 10358884 kB' 'SwapCached: 0 kB' 'Active: 7236176 kB' 'Inactive: 3677348 kB' 'Active(anon): 6846312 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554104 kB' 'Mapped: 215644 kB' 'Shmem: 6295408 kB' 'KReclaimable: 488860 kB' 'Slab: 1119148 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 630288 kB' 'KernelStack: 22240 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8261800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216836 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.610 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
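The same field-by-field loop now repeats for HugePages_Rsvd. Outside of this harness, the figures being extracted here are available directly from the usual kernel interfaces; a quick way to eyeball them on a comparable host (these are the standard procfs/sysfs paths for 2048 kB pages, not anything specific to SPDK):

# Pool-wide counters, the same fields the trace is matching on:
grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo

# Requested pool size (1024 pages of 2048 kB in this run):
cat /proc/sys/vm/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Per-NUMA-node view, which the later get_meminfo calls with a node
# argument read via /sys/devices/system/node/node<N>/meminfo:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages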
00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.611 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.611 
10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.612 nr_hugepages=1024 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.612 resv_hugepages=0 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.612 surplus_hugepages=0 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.612 anon_hugepages=0 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.612 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43842352 kB' 'MemAvailable: 47750932 kB' 'Buffers: 3736 kB' 'Cached: 10358908 kB' 'SwapCached: 0 kB' 'Active: 7235904 kB' 'Inactive: 3677348 kB' 'Active(anon): 6846040 kB' 'Inactive(anon): 0 kB' 
'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553292 kB' 'Mapped: 215632 kB' 'Shmem: 6295432 kB' 'KReclaimable: 488860 kB' 'Slab: 1119116 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 630256 kB' 'KernelStack: 22176 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8259220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
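By this point the trace has already printed the numbers even_2G_alloc cares about: surp=0, resv=0, nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0, and the checks at setup/hugepages.sh@107 and @109 have just been evaluated against them before get_meminfo HugePages_Total was called. A sketch of that bookkeeping, assuming the get_meminfo helper sketched earlier (the real setup/hugepages.sh may arrange it differently):

# Sketch of the accounting traced at setup/hugepages.sh@99-@110.
# Assumes the get_meminfo sketch above; names are illustrative.
check_even_2g_alloc() {
    local nr_hugepages=1024   # requested earlier by the test
    local surp resv total

    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # The pool only counts as settled when the kernel reports exactly the
    # requested page count, with no surplus or reserved pages in the mix.
    (( total == nr_hugepages + surp + resv ))
}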
00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.613 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
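The scan above is setup/common.sh's get_meminfo walking every field of the chosen meminfo file until it reaches the requested one, here HugePages_Total. A rough standalone reconstruction of that pattern, inferred from the xtrace rather than copied from the repository (so names and details may differ from the real helper), is:

get_meminfo() {   # usage: get_meminfo HugePages_Total    or    get_meminfo HugePages_Surp 0
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    # with a node argument, read that node's meminfo from sysfs instead of the global file
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # per-node files prefix every line with "Node N "; strip it, then split each line on ': '
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

Against the values printed in this run, get_meminfo HugePages_Total yields 1024 and get_meminfo HugePages_Surp 0 yields 0, matching the echo/return pairs that follow in the trace.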
00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.614 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28952660 kB' 'MemUsed: 3639424 kB' 'SwapCached: 0 kB' 'Active: 1264656 kB' 'Inactive: 274308 kB' 'Active(anon): 1104804 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1418640 kB' 'Mapped: 79276 kB' 'AnonPages: 123440 kB' 'Shmem: 984480 kB' 'KernelStack: 12872 kB' 'PageTables: 3160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154560 kB' 'Slab: 418520 kB' 'SReclaimable: 154560 kB' 'SUnreclaim: 263960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.615 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14890196 kB' 'MemUsed: 12812912 kB' 'SwapCached: 0 kB' 'Active: 5971388 kB' 'Inactive: 3403040 kB' 'Active(anon): 5741376 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3403040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8944036 kB' 'Mapped: 136356 kB' 'AnonPages: 430560 kB' 'Shmem: 5310984 kB' 'KernelStack: 9336 kB' 'PageTables: 5572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334300 kB' 'Slab: 700596 kB' 'SReclaimable: 334300 kB' 'SUnreclaim: 366296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.616 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
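This second pass repeats the same field scan against /sys/devices/system/node/node1/meminfo so that the even_2G_alloc check can fold each node's surplus pages into its expected count before comparing it with the 512-per-node target. A condensed, approximate sketch of that bookkeeping, reconstructed from the hugepages.sh line numbers visible in the trace (115-117 and 126-128) and reusing the get_meminfo sketch above:

resv=0                        # reserved pages; 0 in this run, judging by the final per-node counts
nodes_sys=(512 512)           # what the test asked each node for
nodes_test=(512 512)          # per-node HugePages_Total gathered earlier in the trace
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")        # 0 for both nodes here
    (( nodes_test[node] += ${surp:-0} ))
done
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
done

With both surpluses at zero this prints node0=512 expecting 512 and node1=512 expecting 512, which is the pass condition the trace checks a little further down at hugepages.sh@130.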
00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.617 node0=512 expecting 512 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.617 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.618 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.618 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:13.618 node1=512 expecting 512 00:04:13.618 10:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.618 00:04:13.618 real 0m2.894s 00:04:13.618 user 0m0.962s 00:04:13.618 sys 0m1.832s 00:04:13.618 10:19:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.618 10:19:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.618 ************************************ 00:04:13.618 END TEST even_2G_alloc 00:04:13.618 ************************************ 00:04:13.618 10:19:16 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:13.618 10:19:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.618 10:19:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.618 10:19:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.618 ************************************ 00:04:13.618 START TEST odd_alloc 00:04:13.618 ************************************ 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:13.618 10:19:16 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.618 10:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.180 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.2 
(8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43847164 kB' 'MemAvailable: 47755744 kB' 'Buffers: 3736 kB' 'Cached: 10359024 kB' 'SwapCached: 0 kB' 'Active: 7237332 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847468 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554608 kB' 'Mapped: 215768 kB' 'Shmem: 6295548 kB' 'KReclaimable: 488860 kB' 'Slab: 1118416 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 629556 kB' 'KernelStack: 22320 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8263296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216852 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3138932 kB' 
'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.182 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43848676 kB' 'MemAvailable: 47757256 kB' 'Buffers: 3736 kB' 'Cached: 10359028 kB' 'SwapCached: 0 kB' 'Active: 7237120 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847256 kB' 'Inactive(anon): 0 
kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554868 kB' 'Mapped: 215652 kB' 'Shmem: 6295552 kB' 'KReclaimable: 488860 kB' 'Slab: 1118448 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 629588 kB' 'KernelStack: 22144 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8261724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216772 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 
10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.183 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.184 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 
10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.185 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43851688 kB' 'MemAvailable: 47760268 kB' 'Buffers: 3736 kB' 'Cached: 10359044 kB' 'SwapCached: 0 kB' 'Active: 7238020 kB' 'Inactive: 3677348 kB' 'Active(anon): 6848156 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555748 kB' 'Mapped: 215660 kB' 'Shmem: 6295568 kB' 'KReclaimable: 488860 kB' 'Slab: 1118676 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 629816 kB' 'KernelStack: 22288 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8263332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216788 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.186 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 
10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.187 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:16.188 nr_hugepages=1025 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.188 resv_hugepages=0 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.188 surplus_hugepages=0 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.188 anon_hugepages=0 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.188 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.188 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43851380 kB' 'MemAvailable: 47759960 kB' 'Buffers: 3736 kB' 'Cached: 10359064 kB' 'SwapCached: 0 kB' 'Active: 7236956 kB' 'Inactive: 3677348 kB' 'Active(anon): 6847092 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554672 kB' 'Mapped: 215652 kB' 'Shmem: 6295588 kB' 'KReclaimable: 488860 kB' 'Slab: 1118676 kB' 'SReclaimable: 488860 kB' 'SUnreclaim: 629816 kB' 'KernelStack: 22240 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8263352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216820 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.189 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.190 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- 
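
The loop traced above is setup/common.sh's get_meminfo scanning every /proc/meminfo field until the requested key (HugePages_Total) matches, then echoing its value (1025). A minimal, self-contained sketch of that field-scan idea, assuming the usual "Key: value kB" format; the function name below is illustrative, not part of the repo:

#!/usr/bin/env bash
# Sketch: look up one field in /proc/meminfo the way the traced loop does.
# get_meminfo_sketch <field> -> prints the numeric value, returns 1 if absent.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every line until the key matches the requested field.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total   # prints 1025 on the box traced above
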
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28955012 kB' 'MemUsed: 3637072 kB' 'SwapCached: 0 kB' 'Active: 1265348 kB' 'Inactive: 274308 kB' 'Active(anon): 1105496 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1418756 kB' 'Mapped: 79276 kB' 'AnonPages: 124028 kB' 'Shmem: 984596 kB' 'KernelStack: 12904 kB' 'PageTables: 3208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154560 kB' 'Slab: 417980 kB' 'SReclaimable: 154560 kB' 'SUnreclaim: 263420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- 
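
get_nodes and the per-node branch of get_meminfo shown above switch from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strip the leading "Node N " prefix before matching fields. A sketch of the same per-node lookup, reusing the extglob prefix strip visible in the trace; the helper name and the surrounding loop are illustrative:

#!/usr/bin/env bash
# Sketch of the per-node lookup the trace performs for HugePages_Surp:
# /sys/devices/system/node/nodeN/meminfo lines start with "Node N ", so the
# prefix is stripped (extglob, as in the trace) and the key matched as before.
shopt -s extglob

get_node_meminfo() {
    local get=$1 node=$2 var val _ line
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # keep scanning until the key matches
        echo "$val"
        return 0
    done
    return 1
}

for n in /sys/devices/system/node/node+([0-9]); do
    n=${n##*node}
    echo "node${n} HugePages_Surp: $(get_node_meminfo HugePages_Surp "$n")"
done
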
setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.191 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.192 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14894288 kB' 'MemUsed: 12808820 kB' 'SwapCached: 0 kB' 'Active: 5972312 kB' 'Inactive: 3403040 kB' 'Active(anon): 5742300 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3403040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8944064 kB' 'Mapped: 136376 kB' 'AnonPages: 431320 kB' 'Shmem: 5311012 kB' 'KernelStack: 9512 kB' 'PageTables: 5940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334300 kB' 'Slab: 700696 kB' 'SReclaimable: 334300 kB' 'SUnreclaim: 366396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.193 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.194 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.195 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
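
With both per-node surplus counts read back as 0, the test only needs the set of per-node totals {512, 513} to match what it requested; which node holds the extra page does not matter, which is why the "node0=512 expecting 513" echo below still passes. A sketch of that order-insensitive comparison, assuming bash 4 associative arrays; all names are illustrative:

#!/usr/bin/env bash
# Sketch of the odd_alloc pass criterion: per-node counts only have to match
# as a set, since either node may hold the 513th page.
declare -A seen_expected=() seen_actual=()
expected=(512 513)     # counts the test asked the kernel to place
actual=(513 512)       # counts read back per node (order may differ)

for v in "${expected[@]}"; do seen_expected[$v]=1; done
for v in "${actual[@]}";   do seen_actual[$v]=1;   done

# Associative-array keys come back in unspecified order, so sort both sides
# before comparing them as strings.
exp_keys=$(printf '%s\n' "${!seen_expected[@]}" | sort -n | tr '\n' ' ')
act_keys=$(printf '%s\n' "${!seen_actual[@]}"   | sort -n | tr '\n' ' ')

if [[ $act_keys == "$exp_keys" ]]; then
    echo "odd_alloc distribution OK: $act_keys"
else
    echo "unexpected per-node split: $act_keys (wanted $exp_keys)"
fi
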
# echo 'node0=512 expecting 513' 00:04:16.196 node0=512 expecting 513 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:16.196 node1=513 expecting 512 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:16.196 00:04:16.196 real 0m2.925s 00:04:16.196 user 0m1.019s 00:04:16.196 sys 0m1.861s 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.196 10:19:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.196 ************************************ 00:04:16.196 END TEST odd_alloc 00:04:16.196 ************************************ 00:04:16.455 10:19:19 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:16.455 10:19:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.455 10:19:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.455 10:19:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.455 ************************************ 00:04:16.455 START TEST custom_alloc 00:04:16.455 ************************************ 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- 
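
get_test_nr_hugepages above turns a requested pool size into a page count by dividing by the default hugepage size: 1048576 kB becomes nr_hugepages=512, and a few lines below 2097152 kB becomes 1024, consistent with the Hugepagesize of 2048 kB reported earlier. A sketch of that arithmetic against the live /proc/meminfo; the helper name is illustrative and the kB unit is inferred from those numbers:

#!/usr/bin/env bash
# Sketch: convert a requested pool size (in kB) into a hugepage count using
# the kernel's default hugepage size, matching the numbers seen in the trace.
pages_for_size_kb() {
    local size_kb=$1 hp_kb
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo $(( size_kb / hp_kb ))
}

echo "1 GiB -> $(pages_for_size_kb 1048576) pages"   # 512 with 2048 kB hugepages
echo "2 GiB -> $(pages_for_size_kb 2097152) pages"   # 1024
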
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.455 10:19:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.987 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.987 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- 
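
The custom_alloc setup above collects nodes_hp[0]=512 and nodes_hp[1]=1024 into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (comma-joined via the local IFS=,) before handing off to scripts/setup.sh. A sketch of that composition step, with the per-node counts taken from the trace and the function name purely illustrative:

#!/usr/bin/env bash
# Sketch: compose the per-node hugepage request string the way the traced
# custom_alloc builds HUGENODE before invoking scripts/setup.sh.
build_hugenode() {
    local IFS=,                      # join array elements with commas
    local -a parts=()
    local node
    for node in "${!nodes_hp[@]}"; do
        parts+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "${parts[*]}"
}

declare -a nodes_hp=([0]=512 [1]=1024)   # counts taken from the trace
HUGENODE=$(build_hugenode)
echo "$HUGENODE"                         # nodes_hp[0]=512,nodes_hp[1]=1024
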
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.987 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42814768 kB' 'MemAvailable: 46723316 kB' 'Buffers: 3736 kB' 'Cached: 10359188 kB' 'SwapCached: 0 kB' 'Active: 7238632 kB' 'Inactive: 3677348 kB' 'Active(anon): 6848768 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556456 kB' 'Mapped: 215700 kB' 'Shmem: 6295712 kB' 'KReclaimable: 488828 kB' 'Slab: 1117892 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629064 kB' 'KernelStack: 22224 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8263980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216868 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.988 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@32 repeats the same '[[ <field> == AnonHugePages ]]' / continue check for MemFree through Committed_AS; none match]
00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42813300 kB' 'MemAvailable: 46721848 kB' 'Buffers: 3736 kB' 'Cached: 10359192 kB' 'SwapCached: 0 kB' 'Active: 7238784 kB' 'Inactive: 3677348 kB' 'Active(anon): 6848920 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556596 kB' 'Mapped: 215652 kB' 'Shmem: 6295716 kB' 'KReclaimable: 488828 kB' 'Slab: 1117916 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629088 kB' 'KernelStack: 22288 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8264000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216868 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.989 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@32 repeats the same '[[ <field> == HugePages_Surp ]]' / continue check for Active through Unaccepted; none match]
00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
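The loop traced above and below is setup/common.sh scanning /proc/meminfo: read -r var val _ with IFS=': ' (common.sh@31) splits each 'Field: value kB' line, every non-matching field falls through the common.sh@32 continue, and the value of the first match is echoed back to hugepages.sh, which is how anon (and, just below, surp) end up as 0 on this runner. A minimal standalone sketch of the same pattern follows; get_meminfo_field is a hypothetical name, not the SPDK helper, and it only reads the system-wide file (the real helper can also take a NUMA node and strips the 'Node N' prefix, as the mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 shows).

#!/usr/bin/env bash
# Sketch of the skip-until-match scan traced at setup/common.sh@31-@33.
get_meminfo_field() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # every other field just continues
        echo "$val"                        # e.g. 1536 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done < /proc/meminfo
    return 1                               # field absent (a choice made for this sketch)
}

get_meminfo_field HugePages_Total   # prints 1536 on this runner, matching the dumps above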
00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42814096 kB' 'MemAvailable: 46722644 kB' 'Buffers: 3736 kB' 'Cached: 10359192 kB' 'SwapCached: 0 kB' 'Active: 7238416 kB' 'Inactive: 3677348 kB' 'Active(anon): 6848552 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556296 kB' 'Mapped: 215652 kB' 'Shmem: 6295716 kB' 'KReclaimable: 488828 kB' 'Slab: 1117916 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629088 kB' 'KernelStack: 22336 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8264020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
216836 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.255 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.256 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.256 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@32 repeats the same '[[ <field> == HugePages_Rsvd ]]' / continue check for Active(anon) through ShmemHugePages; none match]
00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:19.257 nr_hugepages=1536 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.257 resv_hugepages=0 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.257 surplus_hugepages=0 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.257 anon_hugepages=0 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42814320 kB' 'MemAvailable: 46722868 kB' 'Buffers: 3736 kB' 'Cached: 10359232 kB' 'SwapCached: 0 kB' 'Active: 7238548 kB' 'Inactive: 3677348 kB' 'Active(anon): 6848684 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556380 kB' 'Mapped: 215664 kB' 'Shmem: 6295756 kB' 'KReclaimable: 488828 kB' 'Slab: 1117916 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629088 kB' 'KernelStack: 22208 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8262448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
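The loop traced above is setup/common.sh's get_meminfo walking a captured copy of /proc/meminfo (or, when a node number is given, that node's own meminfo under /sys/devices/system/node/) one "key: value" pair at a time until it reaches the requested field; for HugePages_Rsvd that walk produced the resv=0 seen at hugepages.sh@100, and the same walk is now being repeated for HugePages_Total. A minimal bash sketch of the pattern, reconstructed only from the names visible in the trace (get, node, mem_f, var, val) rather than from the real setup/common.sh, with the behaviour for a missing key being an assumption:

  #!/usr/bin/env bash
  shopt -s extglob                      # needed for the +([0-9]) pattern below

  # Sketch only: mirrors the loop traced in the log, not the actual setup/common.sh.
  get_meminfo() {
    local get=$1 node=${2:-}            # e.g. get_meminfo HugePages_Surp 0
    local mem_f=/proc/meminfo var val _
    local -a mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1                            # assumption: requested key not present
  }

In this run, the HugePages_Rsvd lookup returned 0, and the per-node calls further down pass 0 and then 1 as the node argument.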
00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.257 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
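The checks traced at hugepages.sh@107-110 compare the requested pool against what the kernel reports: nr_hugepages=1536 with resv_hugepages=0 and surplus_hugepages=0, after which HugePages_Total is read back (it returns 1536 below). Restated as arithmetic, reusing the get_meminfo sketch above and assuming a mismatch is simply reported:

  # Values echoed in the trace: nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0.
  nr_hugepages=1536 resv=0 surp=0
  (( 1536 == nr_hugepages + surp + resv ))    # hugepages.sh@107
  (( 1536 == nr_hugepages ))                  # hugepages.sh@109
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) ||
    echo "hugepage pool does not match what the kernel reports" >&2   # hugepages.sh@110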
00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.258 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28944124 kB' 'MemUsed: 3647960 kB' 'SwapCached: 0 kB' 'Active: 1266080 kB' 'Inactive: 274308 kB' 'Active(anon): 1106228 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1418880 kB' 'Mapped: 79276 kB' 'AnonPages: 124812 kB' 'Shmem: 984720 kB' 'KernelStack: 12856 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154560 kB' 'Slab: 417508 kB' 'SReclaimable: 154560 kB' 'SUnreclaim: 262948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.259 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.260 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.261 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13866608 kB' 'MemUsed: 13836500 kB' 'SwapCached: 0 kB' 'Active: 5972620 kB' 'Inactive: 3403040 kB' 'Active(anon): 5742608 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3403040 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8944108 kB' 'Mapped: 136388 kB' 'AnonPages: 431652 kB' 'Shmem: 5311056 kB' 'KernelStack: 9384 kB' 'PageTables: 5476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334268 kB' 'Slab: 700408 kB' 'SReclaimable: 334268 kB' 'SUnreclaim: 366140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
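From hugepages.sh@112 onward the trace switches to per-node bookkeeping: get_nodes finds two NUMA nodes with 512 pages expected on node0 and 1024 on node1 (the nodes_sys assignments above), and for each node hugepages.sh@115-117 adds the reserved count plus that node's HugePages_Surp, read from /sys/devices/system/node/nodeN/meminfo; node0 returned 0 above and node1's dump here likewise shows HugePages_Surp: 0. A compact sketch of that per-node loop, reusing the get_meminfo sketch above and assuming the expected counts are already held in nodes_test as the trace implies:

  # Per-node expectations visible in the trace: 512 pages on node0, 1024 on node1.
  nodes_test=([0]=512 [1]=1024)
  resv=0

  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))              # hugepages.sh@116
    surp=$(get_meminfo HugePages_Surp "$node")  # hugepages.sh@117; 0 for both nodes in this run
    (( nodes_test[node] += surp ))
  done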
00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.261 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.261 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:19.262 node0=512 expecting 512 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:19.262 node1=1024 expecting 1024 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:19.262 00:04:19.262 real 0m2.879s 00:04:19.262 user 0m0.927s 00:04:19.262 sys 0m1.775s 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.262 10:19:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.262 ************************************ 00:04:19.262 END TEST custom_alloc 00:04:19.262 ************************************ 00:04:19.262 10:19:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:19.262 10:19:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.262 10:19:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.262 10:19:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.262 ************************************ 00:04:19.262 START TEST no_shrink_alloc 00:04:19.262 ************************************ 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
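The prologue traced just above (hugepages.sh@195 through @73) sizes the pool for no_shrink_alloc: get_test_nr_hugepages 2097152 0 takes 2097152 kB, divides by the 2048 kB hugepage size reported in the meminfo dumps to get nr_hugepages=1024, and, because an explicit node id was passed, records nodes_test[0]=1024. A minimal sketch of that sizing logic is below; it is a reconstruction for illustration (the helper name, the even-split fallback branch and the fixed page size are assumptions taken from values visible in the log), not the SPDK setup script itself.

```bash
#!/usr/bin/env bash
# Sketch only: approximates the per-node hugepage sizing walked through in the
# trace above, not SPDK's setup/hugepages.sh.

default_hugepage_kb=2048   # matches "Hugepagesize: 2048 kB" in the meminfo dumps

# $1: requested hugetlb pool size in kB; remaining args: explicit NUMA node ids
get_test_nr_hugepages() {
    local size_kb=$1; shift
    local -a node_ids=("$@")
    local nr_hugepages=$((size_kb / default_hugepage_kb))

    declare -ga nodes_test=()
    if ((${#node_ids[@]} > 0)); then
        # Explicit node list: each listed node gets the full count,
        # mirroring "nodes_test[_no_nodes]=1024" in the trace.
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages
        done
    else
        # No nodes given (assumed fallback): spread the pages evenly over the
        # NUMA nodes sysfs reports (the traced machine has two, _no_nodes=2).
        local -a sys_nodes=(/sys/devices/system/node/node[0-9]*)
        local no_nodes=${#sys_nodes[@]} node
        for ((node = 0; node < no_nodes; node++)); do
            nodes_test[node]=$((nr_hugepages / no_nodes))
        done
    fi
}

# 2097152 kB (2 GiB) / 2048 kB pages -> 1024 pages, all assigned to node 0.
get_test_nr_hugepages 2097152 0
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]}"
done
```

With the arguments from the log this prints node0=1024, which is the nodes_test state that verify_nr_hugepages then checks against the live meminfo counters in the trace that follows.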
00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.262 10:19:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.562 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.562 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.562 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:22.562 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.562 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.563 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43843416 kB' 'MemAvailable: 47751964 kB' 'Buffers: 3736 kB' 'Cached: 10359352 kB' 'SwapCached: 0 kB' 'Active: 7239288 kB' 'Inactive: 3677348 kB' 'Active(anon): 6849424 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556900 kB' 'Mapped: 215768 kB' 'Shmem: 6295876 kB' 'KReclaimable: 488828 kB' 'Slab: 1118404 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629576 kB' 'KernelStack: 22256 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8261960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
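What verify_nr_hugepages is doing through this stretch of trace is a series of get_meminfo calls: setup/common.sh maps /proc/meminfo (or a node's sysfs meminfo, when one is named) into an array, strips any "Node N " prefix, then walks the entries with IFS=': ' read -r var val _ until the requested key matches, echoing its value and returning. The long runs of "[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" lines above and below are exactly that scan under xtrace. A compact sketch of the same pattern, reconstructed from the trace rather than copied from setup/common.sh:

```bash
#!/usr/bin/env bash
# Sketch only: rebuilt from the xtrace output above, not SPDK's setup/common.sh.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node figures live in sysfs and prefix every line with "Node N ";
    # with no node given this path does not exist and /proc/meminfo is used,
    # matching the "-e /sys/devices/system/node/node/meminfo" check in the log.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # same prefix strip as the traced mapfile step

    local line var val _
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" on ': ' and stop at the requested key.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"           # kB for sizes, a bare count for HugePages_* keys
            return 0
        fi
    done
    echo 0
}

# The lookups visible in this part of the trace:
get_meminfo AnonHugePages      # -> anon=0 in the traced run
get_meminfo HugePages_Surp     # -> surp=0 in the traced run
get_meminfo HugePages_Rsvd     # the resv read the trace continues with
```

In the log the AnonHugePages scan ends with "echo 0", so anon=0 before the HugePages_Surp and HugePages_Rsvd reads repeat the same walk over the counters.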
00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.563 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43843896 kB' 'MemAvailable: 47752444 kB' 'Buffers: 3736 kB' 'Cached: 10359356 kB' 'SwapCached: 0 kB' 'Active: 7238948 kB' 'Inactive: 3677348 kB' 'Active(anon): 6849084 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556488 kB' 'Mapped: 215728 kB' 'Shmem: 6295880 kB' 'KReclaimable: 488828 kB' 'Slab: 1118464 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629636 kB' 'KernelStack: 22224 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8261976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 
'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.564 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 
10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.565 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 
10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.832 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43843604 kB' 'MemAvailable: 47752152 kB' 'Buffers: 3736 kB' 'Cached: 10359372 kB' 'SwapCached: 0 kB' 'Active: 7239252 kB' 'Inactive: 3677348 kB' 'Active(anon): 6849388 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556800 kB' 'Mapped: 215728 kB' 'Shmem: 6295896 kB' 'KReclaimable: 488828 kB' 'Slab: 1118464 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629636 kB' 'KernelStack: 22224 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8262000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.833 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:22.835 nr_hugepages=1024 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.835 resv_hugepages=0 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.835 surplus_hugepages=0 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.835 anon_hugepages=0 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.835 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43843148 kB' 'MemAvailable: 47751696 kB' 'Buffers: 3736 kB' 'Cached: 10359392 kB' 'SwapCached: 0 kB' 'Active: 7239320 kB' 'Inactive: 3677348 kB' 'Active(anon): 6849456 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556888 kB' 'Mapped: 215728 kB' 'Shmem: 6295916 kB' 'KReclaimable: 488828 kB' 'Slab: 1118464 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629636 kB' 'KernelStack: 22256 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8262024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.836 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.837 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:22.838 
10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27903580 kB' 'MemUsed: 4688504 kB' 'SwapCached: 0 kB' 'Active: 1266592 kB' 'Inactive: 274308 kB' 'Active(anon): 1106740 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1419012 kB' 'Mapped: 79276 kB' 'AnonPages: 125164 kB' 'Shmem: 984852 kB' 'KernelStack: 12936 kB' 'PageTables: 3292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154560 kB' 'Slab: 417800 kB' 'SReclaimable: 154560 kB' 'SUnreclaim: 263240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.838 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.838 10:19:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the setup/common.sh@31-@32 loop reads each remaining meminfo field -- Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- and each one fails the HugePages_Surp match and hits 'continue']
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:22.839 node0=1024 expecting 1024
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.839 10:19:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:26.131 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:26.131 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:26.131 INFO: Requested 512 hugepages but 1024 already allocated on node0
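The setup/hugepages.sh@202 step above re-runs SPDK's scripts/setup.sh with the environment captured in the trace; the sketch below restates that invocation (workspace path taken from this job, and assuming setup.sh keeps honouring the NRHUGE/CLEAR_HUGE variables exactly as logged) purely to make the INFO line easier to read.

# Sketch only -- not an extra step performed by the job.
# CLEAR_HUGE=no keeps the existing pool, so requesting 512 pages while node0
# already holds 1024 just produces the INFO message above.
CLEAR_HUGE=no NRHUGE=512 \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
grep -E '^HugePages_(Total|Free):' /proc/meminfo   # still 1024 / 1024 in the snapshots below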
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:26.131 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:26.132 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43858756 kB' 'MemAvailable: 47767304 kB' 'Buffers: 3736 kB' 'Cached: 10359492 kB' 'SwapCached: 0 kB' 'Active: 7239824 kB' 'Inactive: 3677348 kB' 'Active(anon): 6849960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556732 kB' 'Mapped: 215880 kB' 'Shmem: 6296016 kB' 'KReclaimable: 488828 kB' 'Slab: 1118372 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629544 kB' 'KernelStack: 22192 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8262784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: every /proc/meminfo field from MemTotal through HardwareCorrupted fails the AnonHugePages match at setup/common.sh@32 and hits 'continue']
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
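Every get_meminfo call traced in this block has the same shape: snapshot the meminfo source, then read it field by field with IFS=': ' until the requested key matches and its value is echoed. A minimal sketch of that pattern follows (hypothetical helper name, system-wide /proc/meminfo case only, not SPDK's actual setup/common.sh):

# Sketch of the field scan traced at setup/common.sh@17-@33 above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other field hits 'continue'
        echo "$val"                        # e.g. 0 for AnonHugePages on this box
        return 0
    done < /proc/meminfo
}
get_meminfo_sketch AnonHugePages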
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:26.133 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43859556 kB' 'MemAvailable: 47768104 kB' 'Buffers: 3736 kB' 'Cached: 10359496 kB' 'SwapCached: 0 kB' 'Active: 7239904 kB' 'Inactive: 3677348 kB' 'Active(anon): 6850040 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557292 kB' 'Mapped: 215736 kB' 'Shmem: 6296020 kB' 'KReclaimable: 488828 kB' 'Slab: 1118348 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629520 kB' 'KernelStack: 22192 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8262800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: every /proc/meminfo field from MemTotal through HugePages_Rsvd fails the HugePages_Surp match at setup/common.sh@32 and hits 'continue']
00:04:26.135 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.135 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.135 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
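verify_nr_hugepages gathers anon, surp and resv with three separate scans of /proc/meminfo (hugepages.sh@97, @99 and @100). Purely to clarify what those lookups return on this machine -- all three are 0 in the snapshots above -- here is a hedged single-pass equivalent, not SPDK's helper:

# Sketch only: collect the three values the trace fetches one at a time.
while IFS=': ' read -r var val _; do
    case $var in
        AnonHugePages)  anon=$val ;;
        HugePages_Surp) surp=$val ;;
        HugePages_Rsvd) resv=$val ;;
    esac
done < /proc/meminfo
echo "anon=$anon surp=$surp resv=$resv"   # anon=0 surp=0 resv=0 here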
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:26.136 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43858800 kB' 'MemAvailable: 47767348 kB' 'Buffers: 3736 kB' 'Cached: 10359516 kB' 'SwapCached: 0 kB' 'Active: 7239620 kB' 'Inactive: 3677348 kB' 'Active(anon): 6849756 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556964 kB' 'Mapped: 215736 kB' 'Shmem: 6296040 kB' 'KReclaimable: 488828 kB' 'Slab: 1118348 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629520 kB' 'KernelStack: 22192 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8262824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: fields MemTotal through NFS_Unstable fail the HugePages_Rsvd match at setup/common.sh@32 and hit 'continue']
00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.137 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.138 nr_hugepages=1024 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.138 resv_hugepages=0 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.138 surplus_hugepages=0 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.138 anon_hugepages=0 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.138 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43860032 kB' 'MemAvailable: 47768580 kB' 'Buffers: 3736 kB' 
'Cached: 10359532 kB' 'SwapCached: 0 kB' 'Active: 7240496 kB' 'Inactive: 3677348 kB' 'Active(anon): 6850632 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3677348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557860 kB' 'Mapped: 216240 kB' 'Shmem: 6296056 kB' 'KReclaimable: 488828 kB' 'Slab: 1118348 kB' 'SReclaimable: 488828 kB' 'SUnreclaim: 629520 kB' 'KernelStack: 22160 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8264596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3138932 kB' 'DirectMap2M: 15421440 kB' 'DirectMap1G: 50331648 kB' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.400 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.401 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27927188 kB' 'MemUsed: 4664896 kB' 'SwapCached: 0 kB' 'Active: 1268692 kB' 'Inactive: 274308 kB' 'Active(anon): 1108840 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1419128 kB' 
'Mapped: 79276 kB' 'AnonPages: 127076 kB' 'Shmem: 984968 kB' 'KernelStack: 12888 kB' 'PageTables: 3160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 154560 kB' 'Slab: 417840 kB' 'SReclaimable: 154560 kB' 'SUnreclaim: 263280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.402 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.403 node0=1024 expecting 1024 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.403 00:04:26.403 real 0m6.988s 00:04:26.403 user 0m2.544s 00:04:26.403 sys 0m4.559s 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.403 10:19:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.403 ************************************ 00:04:26.403 END TEST no_shrink_alloc 00:04:26.403 ************************************ 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:26.403 10:19:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:26.403 00:04:26.403 real 0m24.672s 00:04:26.403 user 0m8.147s 00:04:26.403 sys 0m14.878s 00:04:26.403 10:19:29 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.403 10:19:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.403 ************************************ 00:04:26.403 END TEST hugepages 00:04:26.403 ************************************ 00:04:26.403 10:19:29 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:26.403 10:19:29 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.403 10:19:29 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.403 10:19:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.403 ************************************ 00:04:26.403 START TEST driver 00:04:26.403 ************************************ 00:04:26.403 10:19:30 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:26.661 * Looking for test storage... 
00:04:26.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:26.661 10:19:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:26.661 10:19:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.661 10:19:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.855 10:19:34 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:30.855 10:19:34 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.855 10:19:34 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.855 10:19:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:30.855 ************************************ 00:04:30.855 START TEST guess_driver 00:04:30.855 ************************************ 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:30.855 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.855 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.855 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.855 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.855 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:30.855 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:30.855 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:30.855 10:19:34 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:30.855 Looking for driver=vfio-pci 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.855 10:19:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.389 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.648 10:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.558 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.558 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.558 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.558 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:35.558 10:19:38 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:35.558 10:19:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.558 10:19:38 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.751 00:04:39.751 real 0m8.983s 00:04:39.751 user 0m2.153s 00:04:39.752 sys 0m4.445s 00:04:39.752 10:19:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.752 10:19:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.752 ************************************ 00:04:39.752 END TEST guess_driver 00:04:39.752 ************************************ 00:04:40.010 00:04:40.010 real 0m13.463s 00:04:40.010 user 0m3.272s 00:04:40.010 sys 0m6.988s 00:04:40.010 10:19:43 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.010 
10:19:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:40.010 ************************************ 00:04:40.010 END TEST driver 00:04:40.010 ************************************ 00:04:40.010 10:19:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:40.010 10:19:43 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.011 10:19:43 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.011 10:19:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:40.011 ************************************ 00:04:40.011 START TEST devices 00:04:40.011 ************************************ 00:04:40.011 10:19:43 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:40.011 * Looking for test storage... 00:04:40.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:40.011 10:19:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:40.011 10:19:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:40.011 10:19:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.011 10:19:43 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:43.298 10:19:46 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:43.298 No valid GPT data, 
bailing 00:04:43.298 10:19:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:43.298 10:19:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:43.298 10:19:46 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:43.298 10:19:46 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.298 10:19:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.298 ************************************ 00:04:43.298 START TEST nvme_mount 00:04:43.298 ************************************ 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:43.298 10:19:46 
setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:43.298 10:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:44.677 Creating new GPT entries in memory. 00:04:44.677 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.677 other utilities. 00:04:44.677 10:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.677 10:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.677 10:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.677 10:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.677 10:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:45.619 Creating new GPT entries in memory. 00:04:45.619 The operation has completed successfully. 00:04:45.619 10:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.619 10:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.619 10:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3686479 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
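The nvme_mount steps traced above boil down to: wipe the disk's partition table, create a single 1 GiB partition, format it with mkfs.ext4 -qF, and mount it under the test directory. A condensed sketch of that sequence under assumed placeholders (DISK and MNT are illustrative, and the uevent synchronisation done by sync_dev_uevents.sh is omitted):

  # Destructive: wipes DISK. Partition -> format -> mount, as in the trace.
  DISK=/dev/nvme0n1
  MNT=/tmp/nvme_mount                       # illustrative mount point, not the test's path

  sgdisk "$DISK" --zap-all                  # destroy any existing GPT/MBR structures
  sgdisk "$DISK" --new=1:2048:2099199       # 1 GiB partition, 2048-sector aligned
  mkfs.ext4 -qF "${DISK}p1"                 # quiet, force: same flags as the trace
  mkdir -p "$MNT"
  mount "${DISK}p1" "$MNT"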
00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.619 10:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:48.152 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.411 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.411 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:48.411 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.412 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.412 10:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.670 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.670 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.670 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.670 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:48.670 
10:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.670 10:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.204 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.463 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.463 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:51.463 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:51.463 10:19:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.463 10:19:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.463 10:19:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.749 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.749 00:04:54.749 real 0m11.469s 00:04:54.749 user 0m3.091s 00:04:54.749 sys 0m6.170s 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.749 10:19:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:54.749 ************************************ 00:04:54.749 END TEST nvme_mount 00:04:54.749 
************************************ 00:04:54.749 10:19:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:54.749 10:19:58 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.749 10:19:58 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.749 10:19:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:55.009 ************************************ 00:04:55.009 START TEST dm_mount 00:04:55.009 ************************************ 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:55.009 10:19:58 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.979 Creating new GPT entries in memory. 00:04:55.979 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:55.979 other utilities. 00:04:55.979 10:19:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:55.979 10:19:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.979 10:19:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:55.979 10:19:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.979 10:19:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:56.918 Creating new GPT entries in memory. 00:04:56.918 The operation has completed successfully. 
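The partition_drive path entered above converts the 1 GiB per-partition size into 512-byte sectors with (( size /= 512 )) and derives each partition's start and end sector before invoking sgdisk under flock, so concurrent runs do not race on the same disk. A sketch of that arithmetic for part_no partitions (variable names mirror the trace; the uevent sync step is omitted):

  # Carve part_no equal 1 GiB partitions, starting at sector 2048.
  DISK=/dev/nvme0n1
  part_no=2
  size=1073741824                 # bytes
  (( size /= 512 ))               # -> 2097152 sectors of 512 bytes

  sgdisk "$DISK" --zap-all

  part_start=0 part_end=0
  for (( part = 1; part <= part_no; part++ )); do
          (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
          (( part_end = part_start + size - 1 ))
          # First pass yields --new=1:2048:2099199, second --new=2:2099200:4196351,
          # matching the sgdisk calls in the trace.
          flock "$DISK" sgdisk "$DISK" --new="$part:$part_start:$part_end"
  done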
00:04:56.918 10:20:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:56.918 10:20:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.918 10:20:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.918 10:20:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.918 10:20:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:57.856 The operation has completed successfully. 00:04:57.856 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.856 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.856 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3690740 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.116 10:20:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:01.407 
10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.407 10:20:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.944 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.945 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.204 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.204 00:05:04.204 real 0m9.405s 00:05:04.204 user 0m2.171s 00:05:04.204 sys 0m4.257s 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.204 10:20:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:04.204 ************************************ 00:05:04.204 END TEST dm_mount 00:05:04.204 ************************************ 00:05:04.463 10:20:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:04.463 10:20:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.463 10:20:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.463 10:20:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.463 
10:20:07 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.463 10:20:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.463 10:20:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.723 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:04.723 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:04.723 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.723 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.723 10:20:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:04.723 10:20:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.723 10:20:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.723 10:20:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.723 10:20:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.723 10:20:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.723 10:20:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:04.723 00:05:04.723 real 0m24.676s 00:05:04.723 user 0m6.393s 00:05:04.723 sys 0m12.941s 00:05:04.723 10:20:08 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.723 10:20:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:04.723 ************************************ 00:05:04.723 END TEST devices 00:05:04.723 ************************************ 00:05:04.723 00:05:04.723 real 1m26.070s 00:05:04.723 user 0m24.815s 00:05:04.723 sys 0m48.945s 00:05:04.723 10:20:08 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.723 10:20:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.723 ************************************ 00:05:04.723 END TEST setup.sh 00:05:04.723 ************************************ 00:05:04.723 10:20:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:08.017 Hugepages 00:05:08.017 node hugesize free / total 00:05:08.017 node0 1048576kB 0 / 0 00:05:08.017 node0 2048kB 2048 / 2048 00:05:08.017 node1 1048576kB 0 / 0 00:05:08.017 node1 2048kB 0 / 0 00:05:08.017 00:05:08.017 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.017 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:08.017 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:08.017 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:08.017 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:08.017 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:08.017 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:08.017 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:08.017 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:08.018 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:08.018 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:08.018 10:20:11 -- spdk/autotest.sh@130 -- # uname -s 00:05:08.018 
10:20:11 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:08.018 10:20:11 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:08.018 10:20:11 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:11.319 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.319 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:12.696 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.696 10:20:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:13.633 10:20:17 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:13.633 10:20:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:13.633 10:20:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:13.633 10:20:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:13.633 10:20:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:13.633 10:20:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:13.633 10:20:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.633 10:20:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.633 10:20:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:13.633 10:20:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:13.633 10:20:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:13.633 10:20:17 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.980 Waiting for block devices as requested 00:05:16.980 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:16.980 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:16.980 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:16.980 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:16.980 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:17.239 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:17.239 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:17.239 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:17.497 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:17.497 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:17.497 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:17.756 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:17.756 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:17.756 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:18.015 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:18.015 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:18.015 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:18.274 10:20:21 -- common/autotest_common.sh@1538 -- # for bdf in 
"${bdfs[@]}" 00:05:18.274 10:20:21 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:18.274 10:20:21 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:18.274 10:20:21 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:18.274 10:20:21 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:18.274 10:20:21 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:18.274 10:20:21 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:18.274 10:20:21 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:18.274 10:20:21 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:18.274 10:20:21 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:18.274 10:20:21 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:18.274 10:20:21 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:18.274 10:20:21 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:18.274 10:20:21 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:18.274 10:20:21 -- common/autotest_common.sh@1557 -- # continue 00:05:18.274 10:20:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:18.274 10:20:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.274 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 10:20:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:18.274 10:20:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.274 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 10:20:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.560 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:21.560 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:21.819 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:21.819 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:21.820 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:21.820 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:21.820 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:23.197 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.456 10:20:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:23.456 10:20:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.456 
10:20:26 -- common/autotest_common.sh@10 -- # set +x 00:05:23.456 10:20:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:23.456 10:20:27 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:23.456 10:20:27 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.456 10:20:27 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:23.456 10:20:27 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:23.456 10:20:27 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:23.456 10:20:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:23.456 10:20:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:23.456 10:20:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.456 10:20:27 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:23.456 10:20:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:23.716 10:20:27 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:23.716 10:20:27 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:23.716 10:20:27 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:23.716 10:20:27 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:23.716 10:20:27 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:23.716 10:20:27 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:23.716 10:20:27 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:23.716 10:20:27 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:23.716 10:20:27 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:23.716 10:20:27 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3699975 00:05:23.716 10:20:27 -- common/autotest_common.sh@1598 -- # waitforlisten 3699975 00:05:23.716 10:20:27 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.716 10:20:27 -- common/autotest_common.sh@831 -- # '[' -z 3699975 ']' 00:05:23.716 10:20:27 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.716 10:20:27 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.716 10:20:27 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.716 10:20:27 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.716 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:23.716 [2024-07-25 10:20:27.230304] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
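A quick aside on the bdf-discovery step traced above: get_nvme_bdfs collects every NVMe traddr that scripts/gen_nvme.sh reports, and get_nvme_bdfs_by_id then keeps only controllers whose PCI device id (read from sysfs) matches 0x0a54. A minimal stand-alone sketch of that pipeline — the repo path is a placeholder, and the 0x0a54 filter simply mirrors the value used in this run:

  rootdir=/path/to/spdk                                        # placeholder for the SPDK checkout
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      # keep only controllers whose PCI device id matches the one this run targets
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
  done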
00:05:23.716 [2024-07-25 10:20:27.230354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699975 ] 00:05:23.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.716 [2024-07-25 10:20:27.299146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.716 [2024-07-25 10:20:27.368369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.650 10:20:28 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.650 10:20:28 -- common/autotest_common.sh@864 -- # return 0 00:05:24.650 10:20:28 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:24.650 10:20:28 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:24.650 10:20:28 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:27.937 nvme0n1 00:05:27.937 10:20:31 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:27.938 [2024-07-25 10:20:31.173375] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:27.938 request: 00:05:27.938 { 00:05:27.938 "nvme_ctrlr_name": "nvme0", 00:05:27.938 "password": "test", 00:05:27.938 "method": "bdev_nvme_opal_revert", 00:05:27.938 "req_id": 1 00:05:27.938 } 00:05:27.938 Got JSON-RPC error response 00:05:27.938 response: 00:05:27.938 { 00:05:27.938 "code": -32602, 00:05:27.938 "message": "Invalid parameters" 00:05:27.938 } 00:05:27.938 10:20:31 -- common/autotest_common.sh@1604 -- # true 00:05:27.938 10:20:31 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:27.938 10:20:31 -- common/autotest_common.sh@1608 -- # killprocess 3699975 00:05:27.938 10:20:31 -- common/autotest_common.sh@950 -- # '[' -z 3699975 ']' 00:05:27.938 10:20:31 -- common/autotest_common.sh@954 -- # kill -0 3699975 00:05:27.938 10:20:31 -- common/autotest_common.sh@955 -- # uname 00:05:27.938 10:20:31 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.938 10:20:31 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3699975 00:05:27.938 10:20:31 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.938 10:20:31 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.938 10:20:31 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3699975' 00:05:27.938 killing process with pid 3699975 00:05:27.938 10:20:31 -- common/autotest_common.sh@969 -- # kill 3699975 00:05:27.938 10:20:31 -- common/autotest_common.sh@974 -- # wait 3699975 00:05:29.896 10:20:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:29.896 10:20:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:29.896 10:20:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.896 10:20:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.896 10:20:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:29.896 10:20:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.896 10:20:33 -- common/autotest_common.sh@10 -- # set +x 00:05:29.896 10:20:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:29.896 10:20:33 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:29.896 10:20:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:05:29.896 10:20:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.896 10:20:33 -- common/autotest_common.sh@10 -- # set +x 00:05:29.896 ************************************ 00:05:29.896 START TEST env 00:05:29.896 ************************************ 00:05:29.896 10:20:33 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:29.896 * Looking for test storage... 00:05:29.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:29.896 10:20:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.896 10:20:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.896 10:20:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.896 10:20:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.896 ************************************ 00:05:29.896 START TEST env_memory 00:05:29.896 ************************************ 00:05:29.896 10:20:33 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.896 00:05:29.896 00:05:29.896 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.896 http://cunit.sourceforge.net/ 00:05:29.896 00:05:29.896 00:05:29.896 Suite: memory 00:05:29.896 Test: alloc and free memory map ...[2024-07-25 10:20:33.595651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.156 passed 00:05:30.156 Test: mem map translation ...[2024-07-25 10:20:33.614813] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.156 [2024-07-25 10:20:33.614835] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.156 [2024-07-25 10:20:33.614871] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.156 [2024-07-25 10:20:33.614881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.156 passed 00:05:30.156 Test: mem map registration ...[2024-07-25 10:20:33.650561] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:30.157 [2024-07-25 10:20:33.650578] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:30.157 passed 00:05:30.157 Test: mem map adjacent registrations ...passed 00:05:30.157 00:05:30.157 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.157 suites 1 1 n/a 0 0 00:05:30.157 tests 4 4 4 0 0 00:05:30.157 asserts 152 152 152 0 n/a 00:05:30.157 00:05:30.157 Elapsed time = 0.135 seconds 00:05:30.157 00:05:30.157 real 0m0.150s 00:05:30.157 user 0m0.133s 00:05:30.157 sys 0m0.016s 00:05:30.157 10:20:33 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.157 10:20:33 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:05:30.157 ************************************ 00:05:30.157 END TEST env_memory 00:05:30.157 ************************************ 00:05:30.157 10:20:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.157 10:20:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.157 10:20:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.157 10:20:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.157 ************************************ 00:05:30.157 START TEST env_vtophys 00:05:30.157 ************************************ 00:05:30.157 10:20:33 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.157 EAL: lib.eal log level changed from notice to debug 00:05:30.157 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.157 EAL: Detected lcore 1 as core 1 on socket 0 00:05:30.157 EAL: Detected lcore 2 as core 2 on socket 0 00:05:30.157 EAL: Detected lcore 3 as core 3 on socket 0 00:05:30.157 EAL: Detected lcore 4 as core 4 on socket 0 00:05:30.157 EAL: Detected lcore 5 as core 5 on socket 0 00:05:30.157 EAL: Detected lcore 6 as core 6 on socket 0 00:05:30.157 EAL: Detected lcore 7 as core 8 on socket 0 00:05:30.157 EAL: Detected lcore 8 as core 9 on socket 0 00:05:30.157 EAL: Detected lcore 9 as core 10 on socket 0 00:05:30.157 EAL: Detected lcore 10 as core 11 on socket 0 00:05:30.157 EAL: Detected lcore 11 as core 12 on socket 0 00:05:30.157 EAL: Detected lcore 12 as core 13 on socket 0 00:05:30.157 EAL: Detected lcore 13 as core 14 on socket 0 00:05:30.157 EAL: Detected lcore 14 as core 16 on socket 0 00:05:30.157 EAL: Detected lcore 15 as core 17 on socket 0 00:05:30.157 EAL: Detected lcore 16 as core 18 on socket 0 00:05:30.157 EAL: Detected lcore 17 as core 19 on socket 0 00:05:30.157 EAL: Detected lcore 18 as core 20 on socket 0 00:05:30.157 EAL: Detected lcore 19 as core 21 on socket 0 00:05:30.157 EAL: Detected lcore 20 as core 22 on socket 0 00:05:30.157 EAL: Detected lcore 21 as core 24 on socket 0 00:05:30.157 EAL: Detected lcore 22 as core 25 on socket 0 00:05:30.157 EAL: Detected lcore 23 as core 26 on socket 0 00:05:30.157 EAL: Detected lcore 24 as core 27 on socket 0 00:05:30.157 EAL: Detected lcore 25 as core 28 on socket 0 00:05:30.157 EAL: Detected lcore 26 as core 29 on socket 0 00:05:30.157 EAL: Detected lcore 27 as core 30 on socket 0 00:05:30.157 EAL: Detected lcore 28 as core 0 on socket 1 00:05:30.157 EAL: Detected lcore 29 as core 1 on socket 1 00:05:30.157 EAL: Detected lcore 30 as core 2 on socket 1 00:05:30.157 EAL: Detected lcore 31 as core 3 on socket 1 00:05:30.157 EAL: Detected lcore 32 as core 4 on socket 1 00:05:30.157 EAL: Detected lcore 33 as core 5 on socket 1 00:05:30.157 EAL: Detected lcore 34 as core 6 on socket 1 00:05:30.157 EAL: Detected lcore 35 as core 8 on socket 1 00:05:30.157 EAL: Detected lcore 36 as core 9 on socket 1 00:05:30.157 EAL: Detected lcore 37 as core 10 on socket 1 00:05:30.157 EAL: Detected lcore 38 as core 11 on socket 1 00:05:30.157 EAL: Detected lcore 39 as core 12 on socket 1 00:05:30.157 EAL: Detected lcore 40 as core 13 on socket 1 00:05:30.157 EAL: Detected lcore 41 as core 14 on socket 1 00:05:30.157 EAL: Detected lcore 42 as core 16 on socket 1 00:05:30.157 EAL: Detected lcore 43 as core 17 on socket 1 00:05:30.157 EAL: Detected lcore 44 as core 18 on socket 1 00:05:30.157 EAL: Detected lcore 45 as core 19 on socket 1 
00:05:30.157 EAL: Detected lcore 46 as core 20 on socket 1 00:05:30.157 EAL: Detected lcore 47 as core 21 on socket 1 00:05:30.157 EAL: Detected lcore 48 as core 22 on socket 1 00:05:30.157 EAL: Detected lcore 49 as core 24 on socket 1 00:05:30.157 EAL: Detected lcore 50 as core 25 on socket 1 00:05:30.157 EAL: Detected lcore 51 as core 26 on socket 1 00:05:30.157 EAL: Detected lcore 52 as core 27 on socket 1 00:05:30.157 EAL: Detected lcore 53 as core 28 on socket 1 00:05:30.157 EAL: Detected lcore 54 as core 29 on socket 1 00:05:30.157 EAL: Detected lcore 55 as core 30 on socket 1 00:05:30.157 EAL: Detected lcore 56 as core 0 on socket 0 00:05:30.157 EAL: Detected lcore 57 as core 1 on socket 0 00:05:30.157 EAL: Detected lcore 58 as core 2 on socket 0 00:05:30.157 EAL: Detected lcore 59 as core 3 on socket 0 00:05:30.157 EAL: Detected lcore 60 as core 4 on socket 0 00:05:30.157 EAL: Detected lcore 61 as core 5 on socket 0 00:05:30.157 EAL: Detected lcore 62 as core 6 on socket 0 00:05:30.157 EAL: Detected lcore 63 as core 8 on socket 0 00:05:30.157 EAL: Detected lcore 64 as core 9 on socket 0 00:05:30.157 EAL: Detected lcore 65 as core 10 on socket 0 00:05:30.157 EAL: Detected lcore 66 as core 11 on socket 0 00:05:30.157 EAL: Detected lcore 67 as core 12 on socket 0 00:05:30.157 EAL: Detected lcore 68 as core 13 on socket 0 00:05:30.157 EAL: Detected lcore 69 as core 14 on socket 0 00:05:30.157 EAL: Detected lcore 70 as core 16 on socket 0 00:05:30.157 EAL: Detected lcore 71 as core 17 on socket 0 00:05:30.157 EAL: Detected lcore 72 as core 18 on socket 0 00:05:30.157 EAL: Detected lcore 73 as core 19 on socket 0 00:05:30.157 EAL: Detected lcore 74 as core 20 on socket 0 00:05:30.157 EAL: Detected lcore 75 as core 21 on socket 0 00:05:30.157 EAL: Detected lcore 76 as core 22 on socket 0 00:05:30.157 EAL: Detected lcore 77 as core 24 on socket 0 00:05:30.157 EAL: Detected lcore 78 as core 25 on socket 0 00:05:30.157 EAL: Detected lcore 79 as core 26 on socket 0 00:05:30.157 EAL: Detected lcore 80 as core 27 on socket 0 00:05:30.157 EAL: Detected lcore 81 as core 28 on socket 0 00:05:30.157 EAL: Detected lcore 82 as core 29 on socket 0 00:05:30.157 EAL: Detected lcore 83 as core 30 on socket 0 00:05:30.157 EAL: Detected lcore 84 as core 0 on socket 1 00:05:30.157 EAL: Detected lcore 85 as core 1 on socket 1 00:05:30.157 EAL: Detected lcore 86 as core 2 on socket 1 00:05:30.157 EAL: Detected lcore 87 as core 3 on socket 1 00:05:30.157 EAL: Detected lcore 88 as core 4 on socket 1 00:05:30.157 EAL: Detected lcore 89 as core 5 on socket 1 00:05:30.157 EAL: Detected lcore 90 as core 6 on socket 1 00:05:30.157 EAL: Detected lcore 91 as core 8 on socket 1 00:05:30.157 EAL: Detected lcore 92 as core 9 on socket 1 00:05:30.157 EAL: Detected lcore 93 as core 10 on socket 1 00:05:30.157 EAL: Detected lcore 94 as core 11 on socket 1 00:05:30.157 EAL: Detected lcore 95 as core 12 on socket 1 00:05:30.157 EAL: Detected lcore 96 as core 13 on socket 1 00:05:30.157 EAL: Detected lcore 97 as core 14 on socket 1 00:05:30.157 EAL: Detected lcore 98 as core 16 on socket 1 00:05:30.157 EAL: Detected lcore 99 as core 17 on socket 1 00:05:30.157 EAL: Detected lcore 100 as core 18 on socket 1 00:05:30.157 EAL: Detected lcore 101 as core 19 on socket 1 00:05:30.157 EAL: Detected lcore 102 as core 20 on socket 1 00:05:30.157 EAL: Detected lcore 103 as core 21 on socket 1 00:05:30.157 EAL: Detected lcore 104 as core 22 on socket 1 00:05:30.157 EAL: Detected lcore 105 as core 24 on socket 1 00:05:30.157 EAL: Detected 
lcore 106 as core 25 on socket 1 00:05:30.157 EAL: Detected lcore 107 as core 26 on socket 1 00:05:30.157 EAL: Detected lcore 108 as core 27 on socket 1 00:05:30.157 EAL: Detected lcore 109 as core 28 on socket 1 00:05:30.157 EAL: Detected lcore 110 as core 29 on socket 1 00:05:30.157 EAL: Detected lcore 111 as core 30 on socket 1 00:05:30.157 EAL: Maximum logical cores by configuration: 128 00:05:30.157 EAL: Detected CPU lcores: 112 00:05:30.157 EAL: Detected NUMA nodes: 2 00:05:30.157 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:30.157 EAL: Detected shared linkage of DPDK 00:05:30.157 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.157 EAL: Bus pci wants IOVA as 'DC' 00:05:30.157 EAL: Buses did not request a specific IOVA mode. 00:05:30.157 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:30.157 EAL: Selected IOVA mode 'VA' 00:05:30.157 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.157 EAL: Probing VFIO support... 00:05:30.157 EAL: IOMMU type 1 (Type 1) is supported 00:05:30.157 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:30.157 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:30.157 EAL: VFIO support initialized 00:05:30.157 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.157 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.157 EAL: Setting up physically contiguous memory... 00:05:30.157 EAL: Setting maximum number of open files to 524288 00:05:30.157 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.158 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:30.158 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.158 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:30.158 EAL: 
Memseg list allocated at socket 1, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:30.158 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:30.158 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:30.158 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.158 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:30.158 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.158 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.158 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:30.158 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:30.158 EAL: Hugepages will be freed exactly as allocated. 00:05:30.158 EAL: No shared files mode enabled, IPC is disabled 00:05:30.158 EAL: No shared files mode enabled, IPC is disabled 00:05:30.158 EAL: TSC frequency is ~2500000 KHz 00:05:30.158 EAL: Main lcore 0 is ready (tid=7f36cd655a00;cpuset=[0]) 00:05:30.158 EAL: Trying to obtain current memory policy. 00:05:30.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.158 EAL: Restoring previous memory policy: 0 00:05:30.158 EAL: request: mp_malloc_sync 00:05:30.158 EAL: No shared files mode enabled, IPC is disabled 00:05:30.158 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.158 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:30.417 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.417 00:05:30.417 00:05:30.417 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.417 http://cunit.sourceforge.net/ 00:05:30.417 00:05:30.417 00:05:30.417 Suite: components_suite 00:05:30.417 Test: vtophys_malloc_test ...passed 00:05:30.417 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:30.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.417 EAL: Restoring previous memory policy: 4 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.417 EAL: Trying to obtain current memory policy. 
00:05:30.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.417 EAL: Restoring previous memory policy: 4 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.417 EAL: Trying to obtain current memory policy. 00:05:30.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.417 EAL: Restoring previous memory policy: 4 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.417 EAL: Trying to obtain current memory policy. 00:05:30.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.417 EAL: Restoring previous memory policy: 4 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.417 EAL: Trying to obtain current memory policy. 00:05:30.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.417 EAL: Restoring previous memory policy: 4 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.417 EAL: Trying to obtain current memory policy. 00:05:30.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.417 EAL: Restoring previous memory policy: 4 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.417 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.417 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.417 EAL: request: mp_malloc_sync 00:05:30.417 EAL: No shared files mode enabled, IPC is disabled 00:05:30.418 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.418 EAL: Trying to obtain current memory policy. 
00:05:30.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.418 EAL: Restoring previous memory policy: 4 00:05:30.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.418 EAL: request: mp_malloc_sync 00:05:30.418 EAL: No shared files mode enabled, IPC is disabled 00:05:30.418 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.418 EAL: request: mp_malloc_sync 00:05:30.418 EAL: No shared files mode enabled, IPC is disabled 00:05:30.418 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.418 EAL: Trying to obtain current memory policy. 00:05:30.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.418 EAL: Restoring previous memory policy: 4 00:05:30.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.418 EAL: request: mp_malloc_sync 00:05:30.418 EAL: No shared files mode enabled, IPC is disabled 00:05:30.418 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.418 EAL: request: mp_malloc_sync 00:05:30.418 EAL: No shared files mode enabled, IPC is disabled 00:05:30.418 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.418 EAL: Trying to obtain current memory policy. 00:05:30.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.677 EAL: Restoring previous memory policy: 4 00:05:30.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.677 EAL: request: mp_malloc_sync 00:05:30.677 EAL: No shared files mode enabled, IPC is disabled 00:05:30.677 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.677 EAL: request: mp_malloc_sync 00:05:30.677 EAL: No shared files mode enabled, IPC is disabled 00:05:30.677 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.677 EAL: Trying to obtain current memory policy. 
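The repeated "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pairs above are vtophys_spdk_malloc_test allocating and freeing progressively larger buffers (from 4MB up to 1026MB), each allocation driving the registered 'spdk:(nil)' mem event callback. To rerun just this unit test outside the harness, something like the following should be enough, assuming the same in-tree paths seen above and that hugepages still need to be reserved:

  cd /path/to/spdk                 # placeholder for the SPDK checkout used in this run
  sudo scripts/setup.sh            # reserve hugepages and bind devices, as the harness does earlier in this log
  sudo test/env/vtophys/vtophys    # the same binary that run_test env_vtophys points at above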
00:05:30.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.936 EAL: Restoring previous memory policy: 4 00:05:30.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.936 EAL: request: mp_malloc_sync 00:05:30.936 EAL: No shared files mode enabled, IPC is disabled 00:05:30.936 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.195 EAL: request: mp_malloc_sync 00:05:31.195 EAL: No shared files mode enabled, IPC is disabled 00:05:31.195 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.195 passed 00:05:31.195 00:05:31.195 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.195 suites 1 1 n/a 0 0 00:05:31.195 tests 2 2 2 0 0 00:05:31.195 asserts 497 497 497 0 n/a 00:05:31.195 00:05:31.195 Elapsed time = 0.959 seconds 00:05:31.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.195 EAL: request: mp_malloc_sync 00:05:31.195 EAL: No shared files mode enabled, IPC is disabled 00:05:31.195 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.195 EAL: No shared files mode enabled, IPC is disabled 00:05:31.195 EAL: No shared files mode enabled, IPC is disabled 00:05:31.195 EAL: No shared files mode enabled, IPC is disabled 00:05:31.195 00:05:31.195 real 0m1.088s 00:05:31.195 user 0m0.636s 00:05:31.195 sys 0m0.426s 00:05:31.195 10:20:34 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.195 10:20:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.195 ************************************ 00:05:31.195 END TEST env_vtophys 00:05:31.195 ************************************ 00:05:31.454 10:20:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.455 10:20:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.455 10:20:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.455 10:20:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.455 ************************************ 00:05:31.455 START TEST env_pci 00:05:31.455 ************************************ 00:05:31.455 10:20:34 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.455 00:05:31.455 00:05:31.455 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.455 http://cunit.sourceforge.net/ 00:05:31.455 00:05:31.455 00:05:31.455 Suite: pci 00:05:31.455 Test: pci_hook ...[2024-07-25 10:20:34.977648] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3701416 has claimed it 00:05:31.455 EAL: Cannot find device (10000:00:01.0) 00:05:31.455 EAL: Failed to attach device on primary process 00:05:31.455 passed 00:05:31.455 00:05:31.455 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.455 suites 1 1 n/a 0 0 00:05:31.455 tests 1 1 1 0 0 00:05:31.455 asserts 25 25 25 0 n/a 00:05:31.455 00:05:31.455 Elapsed time = 0.034 seconds 00:05:31.455 00:05:31.455 real 0m0.056s 00:05:31.455 user 0m0.016s 00:05:31.455 sys 0m0.040s 00:05:31.455 10:20:35 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.455 10:20:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.455 ************************************ 00:05:31.455 END TEST env_pci 00:05:31.455 ************************************ 00:05:31.455 10:20:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.455 
10:20:35 env -- env/env.sh@15 -- # uname 00:05:31.455 10:20:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.455 10:20:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.455 10:20:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.455 10:20:35 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:31.455 10:20:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.455 10:20:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.455 ************************************ 00:05:31.455 START TEST env_dpdk_post_init 00:05:31.455 ************************************ 00:05:31.455 10:20:35 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.455 EAL: Detected CPU lcores: 112 00:05:31.455 EAL: Detected NUMA nodes: 2 00:05:31.455 EAL: Detected shared linkage of DPDK 00:05:31.455 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.455 EAL: Selected IOVA mode 'VA' 00:05:31.455 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.455 EAL: VFIO support initialized 00:05:31.455 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.714 EAL: Using IOMMU type 1 (Type 1) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:31.714 EAL: Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:31.714 EAL: 
Ignore mapping IO port bar(1) 00:05:31.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:32.652 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:35.970 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:35.970 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:36.538 Starting DPDK initialization... 00:05:36.538 Starting SPDK post initialization... 00:05:36.538 SPDK NVMe probe 00:05:36.538 Attaching to 0000:d8:00.0 00:05:36.538 Attached to 0000:d8:00.0 00:05:36.538 Cleaning up... 00:05:36.538 00:05:36.538 real 0m4.857s 00:05:36.538 user 0m3.615s 00:05:36.538 sys 0m0.300s 00:05:36.538 10:20:39 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.538 10:20:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.538 ************************************ 00:05:36.538 END TEST env_dpdk_post_init 00:05:36.538 ************************************ 00:05:36.538 10:20:40 env -- env/env.sh@26 -- # uname 00:05:36.538 10:20:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:36.538 10:20:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.538 10:20:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.538 10:20:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.538 10:20:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.538 ************************************ 00:05:36.538 START TEST env_mem_callbacks 00:05:36.538 ************************************ 00:05:36.538 10:20:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.538 EAL: Detected CPU lcores: 112 00:05:36.538 EAL: Detected NUMA nodes: 2 00:05:36.538 EAL: Detected shared linkage of DPDK 00:05:36.538 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.538 EAL: Selected IOVA mode 'VA' 00:05:36.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.538 EAL: VFIO support initialized 00:05:36.538 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.538 00:05:36.538 00:05:36.538 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.538 http://cunit.sourceforge.net/ 00:05:36.538 00:05:36.538 00:05:36.538 Suite: memory 00:05:36.538 Test: test ... 
00:05:36.538 register 0x200000200000 2097152 00:05:36.538 malloc 3145728 00:05:36.538 register 0x200000400000 4194304 00:05:36.538 buf 0x200000500000 len 3145728 PASSED 00:05:36.538 malloc 64 00:05:36.538 buf 0x2000004fff40 len 64 PASSED 00:05:36.538 malloc 4194304 00:05:36.538 register 0x200000800000 6291456 00:05:36.538 buf 0x200000a00000 len 4194304 PASSED 00:05:36.538 free 0x200000500000 3145728 00:05:36.538 free 0x2000004fff40 64 00:05:36.538 unregister 0x200000400000 4194304 PASSED 00:05:36.538 free 0x200000a00000 4194304 00:05:36.538 unregister 0x200000800000 6291456 PASSED 00:05:36.538 malloc 8388608 00:05:36.538 register 0x200000400000 10485760 00:05:36.538 buf 0x200000600000 len 8388608 PASSED 00:05:36.538 free 0x200000600000 8388608 00:05:36.538 unregister 0x200000400000 10485760 PASSED 00:05:36.538 passed 00:05:36.538 00:05:36.538 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.538 suites 1 1 n/a 0 0 00:05:36.538 tests 1 1 1 0 0 00:05:36.538 asserts 15 15 15 0 n/a 00:05:36.538 00:05:36.538 Elapsed time = 0.005 seconds 00:05:36.538 00:05:36.538 real 0m0.066s 00:05:36.538 user 0m0.019s 00:05:36.538 sys 0m0.046s 00:05:36.538 10:20:40 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.538 10:20:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:36.538 ************************************ 00:05:36.538 END TEST env_mem_callbacks 00:05:36.538 ************************************ 00:05:36.538 00:05:36.538 real 0m6.754s 00:05:36.538 user 0m4.612s 00:05:36.538 sys 0m1.210s 00:05:36.538 10:20:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.538 10:20:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.538 ************************************ 00:05:36.538 END TEST env 00:05:36.538 ************************************ 00:05:36.538 10:20:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.538 10:20:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.538 10:20:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.538 10:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:36.798 ************************************ 00:05:36.798 START TEST rpc 00:05:36.798 ************************************ 00:05:36.798 10:20:40 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.798 * Looking for test storage... 00:05:36.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:36.798 10:20:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3702429 00:05:36.798 10:20:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.798 10:20:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3702429 00:05:36.798 10:20:40 rpc -- common/autotest_common.sh@831 -- # '[' -z 3702429 ']' 00:05:36.798 10:20:40 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.798 10:20:40 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.798 10:20:40 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
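The rpc suite starting here launches its own spdk_tgt with '-e bdev' (the trace continues just below) and, via waitforlisten, polls until the target accepts JSON-RPC calls on the default UNIX socket /var/tmp/spdk.sock. A minimal manual equivalent, run from the SPDK checkout root and sketched with rpc_get_methods used only as a cheap liveness probe:

  ./build/bin/spdk_tgt -e bdev &                               # same binary and '-e bdev' tracepoint mask as in the trace
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do   # poll the default socket /var/tmp/spdk.sock
      sleep 0.5
  done
  ./scripts/rpc.py bdev_get_bdevs                              # the kind of call the rpc_integrity test issues next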
00:05:36.798 10:20:40 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.798 10:20:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.798 10:20:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:36.798 [2024-07-25 10:20:40.389693] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:36.798 [2024-07-25 10:20:40.389754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702429 ] 00:05:36.798 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.798 [2024-07-25 10:20:40.459558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.058 [2024-07-25 10:20:40.534049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:37.058 [2024-07-25 10:20:40.534083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3702429' to capture a snapshot of events at runtime. 00:05:37.058 [2024-07-25 10:20:40.534092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.058 [2024-07-25 10:20:40.534100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.058 [2024-07-25 10:20:40.534107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3702429 for offline analysis/debug. 00:05:37.058 [2024-07-25 10:20:40.534135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.627 10:20:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.627 10:20:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:37.627 10:20:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.627 10:20:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.627 10:20:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.627 10:20:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.627 10:20:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.627 10:20:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.627 10:20:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.627 ************************************ 00:05:37.627 START TEST rpc_integrity 00:05:37.627 ************************************ 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.627 10:20:41 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.627 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.627 { 00:05:37.627 "name": "Malloc0", 00:05:37.627 "aliases": [ 00:05:37.627 "b9bf7f33-cc76-4bb6-a49d-993526b9547f" 00:05:37.627 ], 00:05:37.627 "product_name": "Malloc disk", 00:05:37.627 "block_size": 512, 00:05:37.627 "num_blocks": 16384, 00:05:37.627 "uuid": "b9bf7f33-cc76-4bb6-a49d-993526b9547f", 00:05:37.627 "assigned_rate_limits": { 00:05:37.627 "rw_ios_per_sec": 0, 00:05:37.627 "rw_mbytes_per_sec": 0, 00:05:37.627 "r_mbytes_per_sec": 0, 00:05:37.627 "w_mbytes_per_sec": 0 00:05:37.627 }, 00:05:37.627 "claimed": false, 00:05:37.627 "zoned": false, 00:05:37.627 "supported_io_types": { 00:05:37.627 "read": true, 00:05:37.627 "write": true, 00:05:37.627 "unmap": true, 00:05:37.627 "flush": true, 00:05:37.627 "reset": true, 00:05:37.627 "nvme_admin": false, 00:05:37.627 "nvme_io": false, 00:05:37.627 "nvme_io_md": false, 00:05:37.627 "write_zeroes": true, 00:05:37.627 "zcopy": true, 00:05:37.627 "get_zone_info": false, 00:05:37.627 "zone_management": false, 00:05:37.627 "zone_append": false, 00:05:37.627 "compare": false, 00:05:37.627 "compare_and_write": false, 00:05:37.627 "abort": true, 00:05:37.627 "seek_hole": false, 00:05:37.627 "seek_data": false, 00:05:37.627 "copy": true, 00:05:37.627 "nvme_iov_md": false 00:05:37.627 }, 00:05:37.627 "memory_domains": [ 00:05:37.627 { 00:05:37.627 "dma_device_id": "system", 00:05:37.627 "dma_device_type": 1 00:05:37.627 }, 00:05:37.627 { 00:05:37.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.627 "dma_device_type": 2 00:05:37.627 } 00:05:37.627 ], 00:05:37.627 "driver_specific": {} 00:05:37.627 } 00:05:37.627 ]' 00:05:37.627 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.887 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.887 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.887 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.887 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.887 [2024-07-25 10:20:41.354534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.887 [2024-07-25 10:20:41.354566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.887 [2024-07-25 10:20:41.354580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1155440 00:05:37.887 [2024-07-25 10:20:41.354588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:37.887 [2024-07-25 10:20:41.355656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.887 [2024-07-25 10:20:41.355678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.887 Passthru0 00:05:37.887 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.887 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.887 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.887 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.887 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.887 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.887 { 00:05:37.887 "name": "Malloc0", 00:05:37.887 "aliases": [ 00:05:37.887 "b9bf7f33-cc76-4bb6-a49d-993526b9547f" 00:05:37.887 ], 00:05:37.887 "product_name": "Malloc disk", 00:05:37.887 "block_size": 512, 00:05:37.887 "num_blocks": 16384, 00:05:37.887 "uuid": "b9bf7f33-cc76-4bb6-a49d-993526b9547f", 00:05:37.887 "assigned_rate_limits": { 00:05:37.887 "rw_ios_per_sec": 0, 00:05:37.887 "rw_mbytes_per_sec": 0, 00:05:37.887 "r_mbytes_per_sec": 0, 00:05:37.887 "w_mbytes_per_sec": 0 00:05:37.887 }, 00:05:37.887 "claimed": true, 00:05:37.887 "claim_type": "exclusive_write", 00:05:37.887 "zoned": false, 00:05:37.887 "supported_io_types": { 00:05:37.887 "read": true, 00:05:37.887 "write": true, 00:05:37.887 "unmap": true, 00:05:37.887 "flush": true, 00:05:37.887 "reset": true, 00:05:37.887 "nvme_admin": false, 00:05:37.887 "nvme_io": false, 00:05:37.887 "nvme_io_md": false, 00:05:37.887 "write_zeroes": true, 00:05:37.887 "zcopy": true, 00:05:37.887 "get_zone_info": false, 00:05:37.887 "zone_management": false, 00:05:37.887 "zone_append": false, 00:05:37.887 "compare": false, 00:05:37.887 "compare_and_write": false, 00:05:37.887 "abort": true, 00:05:37.887 "seek_hole": false, 00:05:37.887 "seek_data": false, 00:05:37.887 "copy": true, 00:05:37.887 "nvme_iov_md": false 00:05:37.887 }, 00:05:37.887 "memory_domains": [ 00:05:37.887 { 00:05:37.887 "dma_device_id": "system", 00:05:37.887 "dma_device_type": 1 00:05:37.887 }, 00:05:37.887 { 00:05:37.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.887 "dma_device_type": 2 00:05:37.887 } 00:05:37.887 ], 00:05:37.887 "driver_specific": {} 00:05:37.887 }, 00:05:37.887 { 00:05:37.887 "name": "Passthru0", 00:05:37.887 "aliases": [ 00:05:37.887 "84370d27-c6f7-5bcd-bb90-176c546b0259" 00:05:37.887 ], 00:05:37.887 "product_name": "passthru", 00:05:37.887 "block_size": 512, 00:05:37.887 "num_blocks": 16384, 00:05:37.887 "uuid": "84370d27-c6f7-5bcd-bb90-176c546b0259", 00:05:37.887 "assigned_rate_limits": { 00:05:37.887 "rw_ios_per_sec": 0, 00:05:37.887 "rw_mbytes_per_sec": 0, 00:05:37.887 "r_mbytes_per_sec": 0, 00:05:37.887 "w_mbytes_per_sec": 0 00:05:37.887 }, 00:05:37.887 "claimed": false, 00:05:37.888 "zoned": false, 00:05:37.888 "supported_io_types": { 00:05:37.888 "read": true, 00:05:37.888 "write": true, 00:05:37.888 "unmap": true, 00:05:37.888 "flush": true, 00:05:37.888 "reset": true, 00:05:37.888 "nvme_admin": false, 00:05:37.888 "nvme_io": false, 00:05:37.888 "nvme_io_md": false, 00:05:37.888 "write_zeroes": true, 00:05:37.888 "zcopy": true, 00:05:37.888 "get_zone_info": false, 00:05:37.888 "zone_management": false, 00:05:37.888 "zone_append": false, 00:05:37.888 "compare": false, 00:05:37.888 "compare_and_write": false, 00:05:37.888 "abort": true, 00:05:37.888 
"seek_hole": false, 00:05:37.888 "seek_data": false, 00:05:37.888 "copy": true, 00:05:37.888 "nvme_iov_md": false 00:05:37.888 }, 00:05:37.888 "memory_domains": [ 00:05:37.888 { 00:05:37.888 "dma_device_id": "system", 00:05:37.888 "dma_device_type": 1 00:05:37.888 }, 00:05:37.888 { 00:05:37.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.888 "dma_device_type": 2 00:05:37.888 } 00:05:37.888 ], 00:05:37.888 "driver_specific": { 00:05:37.888 "passthru": { 00:05:37.888 "name": "Passthru0", 00:05:37.888 "base_bdev_name": "Malloc0" 00:05:37.888 } 00:05:37.888 } 00:05:37.888 } 00:05:37.888 ]' 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.888 10:20:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.888 00:05:37.888 real 0m0.282s 00:05:37.888 user 0m0.175s 00:05:37.888 sys 0m0.051s 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.888 10:20:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.888 ************************************ 00:05:37.888 END TEST rpc_integrity 00:05:37.888 ************************************ 00:05:37.888 10:20:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.888 10:20:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.888 10:20:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.888 10:20:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.888 ************************************ 00:05:37.888 START TEST rpc_plugins 00:05:37.888 ************************************ 00:05:37.888 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:37.888 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.888 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.888 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.888 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.888 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:38.148 10:20:41 
rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:38.148 { 00:05:38.148 "name": "Malloc1", 00:05:38.148 "aliases": [ 00:05:38.148 "294d22c3-06bb-4b80-83ba-65843c7fe955" 00:05:38.148 ], 00:05:38.148 "product_name": "Malloc disk", 00:05:38.148 "block_size": 4096, 00:05:38.148 "num_blocks": 256, 00:05:38.148 "uuid": "294d22c3-06bb-4b80-83ba-65843c7fe955", 00:05:38.148 "assigned_rate_limits": { 00:05:38.148 "rw_ios_per_sec": 0, 00:05:38.148 "rw_mbytes_per_sec": 0, 00:05:38.148 "r_mbytes_per_sec": 0, 00:05:38.148 "w_mbytes_per_sec": 0 00:05:38.148 }, 00:05:38.148 "claimed": false, 00:05:38.148 "zoned": false, 00:05:38.148 "supported_io_types": { 00:05:38.148 "read": true, 00:05:38.148 "write": true, 00:05:38.148 "unmap": true, 00:05:38.148 "flush": true, 00:05:38.148 "reset": true, 00:05:38.148 "nvme_admin": false, 00:05:38.148 "nvme_io": false, 00:05:38.148 "nvme_io_md": false, 00:05:38.148 "write_zeroes": true, 00:05:38.148 "zcopy": true, 00:05:38.148 "get_zone_info": false, 00:05:38.148 "zone_management": false, 00:05:38.148 "zone_append": false, 00:05:38.148 "compare": false, 00:05:38.148 "compare_and_write": false, 00:05:38.148 "abort": true, 00:05:38.148 "seek_hole": false, 00:05:38.148 "seek_data": false, 00:05:38.148 "copy": true, 00:05:38.148 "nvme_iov_md": false 00:05:38.148 }, 00:05:38.148 "memory_domains": [ 00:05:38.148 { 00:05:38.148 "dma_device_id": "system", 00:05:38.148 "dma_device_type": 1 00:05:38.148 }, 00:05:38.148 { 00:05:38.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.148 "dma_device_type": 2 00:05:38.148 } 00:05:38.148 ], 00:05:38.148 "driver_specific": {} 00:05:38.148 } 00:05:38.148 ]' 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:38.148 10:20:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:38.148 00:05:38.148 real 0m0.143s 00:05:38.148 user 0m0.085s 00:05:38.148 sys 0m0.026s 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.148 10:20:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.148 ************************************ 00:05:38.148 END TEST rpc_plugins 00:05:38.148 ************************************ 00:05:38.148 10:20:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.148 10:20:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 
-le 1 ']' 00:05:38.148 10:20:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.148 10:20:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.148 ************************************ 00:05:38.148 START TEST rpc_trace_cmd_test 00:05:38.148 ************************************ 00:05:38.148 10:20:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:38.148 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.148 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.148 10:20:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.148 10:20:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.148 10:20:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.148 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.148 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3702429", 00:05:38.148 "tpoint_group_mask": "0x8", 00:05:38.148 "iscsi_conn": { 00:05:38.148 "mask": "0x2", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "scsi": { 00:05:38.148 "mask": "0x4", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "bdev": { 00:05:38.148 "mask": "0x8", 00:05:38.148 "tpoint_mask": "0xffffffffffffffff" 00:05:38.148 }, 00:05:38.148 "nvmf_rdma": { 00:05:38.148 "mask": "0x10", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "nvmf_tcp": { 00:05:38.148 "mask": "0x20", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "ftl": { 00:05:38.148 "mask": "0x40", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "blobfs": { 00:05:38.148 "mask": "0x80", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "dsa": { 00:05:38.148 "mask": "0x200", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "thread": { 00:05:38.148 "mask": "0x400", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "nvme_pcie": { 00:05:38.148 "mask": "0x800", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "iaa": { 00:05:38.148 "mask": "0x1000", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "nvme_tcp": { 00:05:38.148 "mask": "0x2000", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "bdev_nvme": { 00:05:38.148 "mask": "0x4000", 00:05:38.148 "tpoint_mask": "0x0" 00:05:38.148 }, 00:05:38.148 "sock": { 00:05:38.148 "mask": "0x8000", 00:05:38.149 "tpoint_mask": "0x0" 00:05:38.149 } 00:05:38.149 }' 00:05:38.149 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.409 10:20:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.409 10:20:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.409 00:05:38.409 real 0m0.218s 00:05:38.409 user 0m0.177s 00:05:38.409 sys 0m0.033s 00:05:38.409 10:20:42 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.409 10:20:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.409 ************************************ 00:05:38.409 END TEST rpc_trace_cmd_test 00:05:38.409 ************************************ 00:05:38.409 10:20:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.409 10:20:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.409 10:20:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.409 10:20:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.409 10:20:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.409 10:20:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.409 ************************************ 00:05:38.409 START TEST rpc_daemon_integrity 00:05:38.409 ************************************ 00:05:38.409 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:38.409 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.409 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.409 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.409 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.409 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.409 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.669 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.669 { 00:05:38.669 "name": "Malloc2", 00:05:38.669 "aliases": [ 00:05:38.669 "ad758a42-c336-49a9-958a-a767bdedf6f1" 00:05:38.669 ], 00:05:38.670 "product_name": "Malloc disk", 00:05:38.670 "block_size": 512, 00:05:38.670 "num_blocks": 16384, 00:05:38.670 "uuid": "ad758a42-c336-49a9-958a-a767bdedf6f1", 00:05:38.670 "assigned_rate_limits": { 00:05:38.670 "rw_ios_per_sec": 0, 00:05:38.670 "rw_mbytes_per_sec": 0, 00:05:38.670 "r_mbytes_per_sec": 0, 00:05:38.670 "w_mbytes_per_sec": 0 00:05:38.670 }, 00:05:38.670 "claimed": false, 00:05:38.670 "zoned": false, 00:05:38.670 "supported_io_types": { 00:05:38.670 "read": true, 00:05:38.670 "write": true, 00:05:38.670 "unmap": true, 00:05:38.670 "flush": true, 00:05:38.670 "reset": true, 00:05:38.670 "nvme_admin": false, 00:05:38.670 "nvme_io": false, 00:05:38.670 "nvme_io_md": false, 00:05:38.670 "write_zeroes": true, 00:05:38.670 "zcopy": true, 00:05:38.670 "get_zone_info": false, 00:05:38.670 "zone_management": false, 00:05:38.670 
"zone_append": false, 00:05:38.670 "compare": false, 00:05:38.670 "compare_and_write": false, 00:05:38.670 "abort": true, 00:05:38.670 "seek_hole": false, 00:05:38.670 "seek_data": false, 00:05:38.670 "copy": true, 00:05:38.670 "nvme_iov_md": false 00:05:38.670 }, 00:05:38.670 "memory_domains": [ 00:05:38.670 { 00:05:38.670 "dma_device_id": "system", 00:05:38.670 "dma_device_type": 1 00:05:38.670 }, 00:05:38.670 { 00:05:38.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.670 "dma_device_type": 2 00:05:38.670 } 00:05:38.670 ], 00:05:38.670 "driver_specific": {} 00:05:38.670 } 00:05:38.670 ]' 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.670 [2024-07-25 10:20:42.236900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.670 [2024-07-25 10:20:42.236929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.670 [2024-07-25 10:20:42.236941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12fae70 00:05:38.670 [2024-07-25 10:20:42.236953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.670 [2024-07-25 10:20:42.237868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.670 [2024-07-25 10:20:42.237891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.670 Passthru0 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.670 { 00:05:38.670 "name": "Malloc2", 00:05:38.670 "aliases": [ 00:05:38.670 "ad758a42-c336-49a9-958a-a767bdedf6f1" 00:05:38.670 ], 00:05:38.670 "product_name": "Malloc disk", 00:05:38.670 "block_size": 512, 00:05:38.670 "num_blocks": 16384, 00:05:38.670 "uuid": "ad758a42-c336-49a9-958a-a767bdedf6f1", 00:05:38.670 "assigned_rate_limits": { 00:05:38.670 "rw_ios_per_sec": 0, 00:05:38.670 "rw_mbytes_per_sec": 0, 00:05:38.670 "r_mbytes_per_sec": 0, 00:05:38.670 "w_mbytes_per_sec": 0 00:05:38.670 }, 00:05:38.670 "claimed": true, 00:05:38.670 "claim_type": "exclusive_write", 00:05:38.670 "zoned": false, 00:05:38.670 "supported_io_types": { 00:05:38.670 "read": true, 00:05:38.670 "write": true, 00:05:38.670 "unmap": true, 00:05:38.670 "flush": true, 00:05:38.670 "reset": true, 00:05:38.670 "nvme_admin": false, 00:05:38.670 "nvme_io": false, 00:05:38.670 "nvme_io_md": false, 00:05:38.670 "write_zeroes": true, 00:05:38.670 "zcopy": true, 00:05:38.670 "get_zone_info": false, 00:05:38.670 "zone_management": false, 00:05:38.670 "zone_append": false, 00:05:38.670 "compare": false, 00:05:38.670 "compare_and_write": false, 00:05:38.670 "abort": true, 
00:05:38.670 "seek_hole": false, 00:05:38.670 "seek_data": false, 00:05:38.670 "copy": true, 00:05:38.670 "nvme_iov_md": false 00:05:38.670 }, 00:05:38.670 "memory_domains": [ 00:05:38.670 { 00:05:38.670 "dma_device_id": "system", 00:05:38.670 "dma_device_type": 1 00:05:38.670 }, 00:05:38.670 { 00:05:38.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.670 "dma_device_type": 2 00:05:38.670 } 00:05:38.670 ], 00:05:38.670 "driver_specific": {} 00:05:38.670 }, 00:05:38.670 { 00:05:38.670 "name": "Passthru0", 00:05:38.670 "aliases": [ 00:05:38.670 "0544f359-3045-50ac-a6fb-4893ba410bad" 00:05:38.670 ], 00:05:38.670 "product_name": "passthru", 00:05:38.670 "block_size": 512, 00:05:38.670 "num_blocks": 16384, 00:05:38.670 "uuid": "0544f359-3045-50ac-a6fb-4893ba410bad", 00:05:38.670 "assigned_rate_limits": { 00:05:38.670 "rw_ios_per_sec": 0, 00:05:38.670 "rw_mbytes_per_sec": 0, 00:05:38.670 "r_mbytes_per_sec": 0, 00:05:38.670 "w_mbytes_per_sec": 0 00:05:38.670 }, 00:05:38.670 "claimed": false, 00:05:38.670 "zoned": false, 00:05:38.670 "supported_io_types": { 00:05:38.670 "read": true, 00:05:38.670 "write": true, 00:05:38.670 "unmap": true, 00:05:38.670 "flush": true, 00:05:38.670 "reset": true, 00:05:38.670 "nvme_admin": false, 00:05:38.670 "nvme_io": false, 00:05:38.670 "nvme_io_md": false, 00:05:38.670 "write_zeroes": true, 00:05:38.670 "zcopy": true, 00:05:38.670 "get_zone_info": false, 00:05:38.670 "zone_management": false, 00:05:38.670 "zone_append": false, 00:05:38.670 "compare": false, 00:05:38.670 "compare_and_write": false, 00:05:38.670 "abort": true, 00:05:38.670 "seek_hole": false, 00:05:38.670 "seek_data": false, 00:05:38.670 "copy": true, 00:05:38.670 "nvme_iov_md": false 00:05:38.670 }, 00:05:38.670 "memory_domains": [ 00:05:38.670 { 00:05:38.670 "dma_device_id": "system", 00:05:38.670 "dma_device_type": 1 00:05:38.670 }, 00:05:38.670 { 00:05:38.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.670 "dma_device_type": 2 00:05:38.670 } 00:05:38.670 ], 00:05:38.670 "driver_specific": { 00:05:38.670 "passthru": { 00:05:38.670 "name": "Passthru0", 00:05:38.670 "base_bdev_name": "Malloc2" 00:05:38.670 } 00:05:38.670 } 00:05:38.670 } 00:05:38.670 ]' 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.670 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.671 10:20:42 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.671 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.931 10:20:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.931 00:05:38.931 real 0m0.282s 00:05:38.931 user 0m0.184s 00:05:38.931 sys 0m0.041s 00:05:38.931 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.931 10:20:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.931 ************************************ 00:05:38.931 END TEST rpc_daemon_integrity 00:05:38.931 ************************************ 00:05:38.931 10:20:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.931 10:20:42 rpc -- rpc/rpc.sh@84 -- # killprocess 3702429 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@950 -- # '[' -z 3702429 ']' 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@954 -- # kill -0 3702429 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@955 -- # uname 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3702429 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3702429' 00:05:38.931 killing process with pid 3702429 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@969 -- # kill 3702429 00:05:38.931 10:20:42 rpc -- common/autotest_common.sh@974 -- # wait 3702429 00:05:39.191 00:05:39.191 real 0m2.532s 00:05:39.191 user 0m3.206s 00:05:39.191 sys 0m0.804s 00:05:39.191 10:20:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.191 10:20:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.191 ************************************ 00:05:39.191 END TEST rpc 00:05:39.191 ************************************ 00:05:39.191 10:20:42 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.191 10:20:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.191 10:20:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.191 10:20:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.191 ************************************ 00:05:39.191 START TEST skip_rpc 00:05:39.191 ************************************ 00:05:39.191 10:20:42 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.452 * Looking for test storage... 
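The rpc_integrity and rpc_daemon_integrity passes above drive everything through rpc_cmd; against a live target the same bdev lifecycle reduces to plain rpc.py calls. A manual sketch under the same ./spdk and default-socket assumptions (the names Malloc0 and Passthru0 match what the target assigned above):

./spdk/scripts/rpc.py bdev_malloc_create 8 512                     # 8 MiB malloc bdev -> Malloc0 (16384 blocks of 512 B)
./spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0 # claims Malloc0, exposes Passthru0
./spdk/scripts/rpc.py bdev_get_bdevs | jq length                   # 2 while the passthru claim is held
./spdk/scripts/rpc.py bdev_passthru_delete Passthru0
./spdk/scripts/rpc.py bdev_malloc_delete Malloc0
./spdk/scripts/rpc.py bdev_get_bdevs | jq length                   # back to 0, as the tests assert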
00:05:39.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.452 10:20:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:39.452 10:20:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.452 10:20:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:39.452 10:20:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.452 10:20:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.452 10:20:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.452 ************************************ 00:05:39.452 START TEST skip_rpc 00:05:39.452 ************************************ 00:05:39.452 10:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:39.452 10:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3703132 00:05:39.452 10:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.452 10:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:39.452 10:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:39.452 [2024-07-25 10:20:43.045805] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:39.452 [2024-07-25 10:20:43.045849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703132 ] 00:05:39.452 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.452 [2024-07-25 10:20:43.114625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.711 [2024-07-25 10:20:43.183661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.988 10:20:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:44.988 10:20:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3703132 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3703132 ']' 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3703132 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3703132 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3703132' 00:05:44.988 killing process with pid 3703132 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3703132 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3703132 00:05:44.988 00:05:44.988 real 0m5.371s 00:05:44.988 user 0m5.119s 00:05:44.988 sys 0m0.294s 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.988 10:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.988 ************************************ 00:05:44.988 END TEST skip_rpc 00:05:44.988 ************************************ 00:05:44.988 10:20:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:44.988 10:20:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.988 10:20:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.988 10:20:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.989 ************************************ 00:05:44.989 START TEST skip_rpc_with_json 00:05:44.989 ************************************ 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3703958 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3703958 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3703958 ']' 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
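skip_rpc, which just completed, checks a single property: with --no-rpc-server nothing listens on /var/tmp/spdk.sock, so any RPC must fail. A hand-run sketch under the same path assumptions (the trailing kill stands in for the harness's killprocess):

./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5                                                    # the test also just sleeps before probing
./spdk/scripts/rpc.py spdk_get_version \
        && echo "unexpected: RPC answered" \
        || echo "RPC refused, as skip_rpc expects"
kill %1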
00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.989 10:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.989 [2024-07-25 10:20:48.506643] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:44.989 [2024-07-25 10:20:48.506688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703958 ] 00:05:44.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.989 [2024-07-25 10:20:48.579344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.989 [2024-07-25 10:20:48.649910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.928 [2024-07-25 10:20:49.295614] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:45.928 request: 00:05:45.928 { 00:05:45.928 "trtype": "tcp", 00:05:45.928 "method": "nvmf_get_transports", 00:05:45.928 "req_id": 1 00:05:45.928 } 00:05:45.928 Got JSON-RPC error response 00:05:45.928 response: 00:05:45.928 { 00:05:45.928 "code": -19, 00:05:45.928 "message": "No such device" 00:05:45.928 } 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.928 [2024-07-25 10:20:49.303708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.928 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:45.928 { 00:05:45.928 "subsystems": [ 00:05:45.928 { 00:05:45.928 "subsystem": "vfio_user_target", 00:05:45.928 "config": null 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "subsystem": "keyring", 00:05:45.928 "config": [] 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "subsystem": "iobuf", 00:05:45.928 "config": [ 00:05:45.928 { 00:05:45.928 "method": "iobuf_set_options", 00:05:45.928 "params": { 00:05:45.928 "small_pool_count": 8192, 00:05:45.928 "large_pool_count": 1024, 00:05:45.928 "small_bufsize": 8192, 00:05:45.928 "large_bufsize": 
135168 00:05:45.928 } 00:05:45.928 } 00:05:45.928 ] 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "subsystem": "sock", 00:05:45.928 "config": [ 00:05:45.928 { 00:05:45.928 "method": "sock_set_default_impl", 00:05:45.928 "params": { 00:05:45.928 "impl_name": "posix" 00:05:45.928 } 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "method": "sock_impl_set_options", 00:05:45.928 "params": { 00:05:45.928 "impl_name": "ssl", 00:05:45.928 "recv_buf_size": 4096, 00:05:45.928 "send_buf_size": 4096, 00:05:45.928 "enable_recv_pipe": true, 00:05:45.928 "enable_quickack": false, 00:05:45.928 "enable_placement_id": 0, 00:05:45.928 "enable_zerocopy_send_server": true, 00:05:45.928 "enable_zerocopy_send_client": false, 00:05:45.928 "zerocopy_threshold": 0, 00:05:45.928 "tls_version": 0, 00:05:45.928 "enable_ktls": false 00:05:45.928 } 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "method": "sock_impl_set_options", 00:05:45.928 "params": { 00:05:45.928 "impl_name": "posix", 00:05:45.928 "recv_buf_size": 2097152, 00:05:45.928 "send_buf_size": 2097152, 00:05:45.928 "enable_recv_pipe": true, 00:05:45.928 "enable_quickack": false, 00:05:45.928 "enable_placement_id": 0, 00:05:45.928 "enable_zerocopy_send_server": true, 00:05:45.928 "enable_zerocopy_send_client": false, 00:05:45.928 "zerocopy_threshold": 0, 00:05:45.928 "tls_version": 0, 00:05:45.928 "enable_ktls": false 00:05:45.928 } 00:05:45.928 } 00:05:45.928 ] 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "subsystem": "vmd", 00:05:45.928 "config": [] 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "subsystem": "accel", 00:05:45.928 "config": [ 00:05:45.928 { 00:05:45.928 "method": "accel_set_options", 00:05:45.928 "params": { 00:05:45.928 "small_cache_size": 128, 00:05:45.928 "large_cache_size": 16, 00:05:45.928 "task_count": 2048, 00:05:45.928 "sequence_count": 2048, 00:05:45.928 "buf_count": 2048 00:05:45.928 } 00:05:45.928 } 00:05:45.928 ] 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "subsystem": "bdev", 00:05:45.928 "config": [ 00:05:45.928 { 00:05:45.928 "method": "bdev_set_options", 00:05:45.928 "params": { 00:05:45.928 "bdev_io_pool_size": 65535, 00:05:45.928 "bdev_io_cache_size": 256, 00:05:45.928 "bdev_auto_examine": true, 00:05:45.928 "iobuf_small_cache_size": 128, 00:05:45.928 "iobuf_large_cache_size": 16 00:05:45.928 } 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "method": "bdev_raid_set_options", 00:05:45.928 "params": { 00:05:45.928 "process_window_size_kb": 1024, 00:05:45.928 "process_max_bandwidth_mb_sec": 0 00:05:45.928 } 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "method": "bdev_iscsi_set_options", 00:05:45.928 "params": { 00:05:45.928 "timeout_sec": 30 00:05:45.928 } 00:05:45.928 }, 00:05:45.928 { 00:05:45.928 "method": "bdev_nvme_set_options", 00:05:45.928 "params": { 00:05:45.928 "action_on_timeout": "none", 00:05:45.928 "timeout_us": 0, 00:05:45.928 "timeout_admin_us": 0, 00:05:45.928 "keep_alive_timeout_ms": 10000, 00:05:45.928 "arbitration_burst": 0, 00:05:45.928 "low_priority_weight": 0, 00:05:45.928 "medium_priority_weight": 0, 00:05:45.928 "high_priority_weight": 0, 00:05:45.928 "nvme_adminq_poll_period_us": 10000, 00:05:45.928 "nvme_ioq_poll_period_us": 0, 00:05:45.928 "io_queue_requests": 0, 00:05:45.928 "delay_cmd_submit": true, 00:05:45.928 "transport_retry_count": 4, 00:05:45.928 "bdev_retry_count": 3, 00:05:45.928 "transport_ack_timeout": 0, 00:05:45.928 "ctrlr_loss_timeout_sec": 0, 00:05:45.928 "reconnect_delay_sec": 0, 00:05:45.928 "fast_io_fail_timeout_sec": 0, 00:05:45.928 "disable_auto_failback": false, 00:05:45.928 "generate_uuids": 
false, 00:05:45.928 "transport_tos": 0, 00:05:45.928 "nvme_error_stat": false, 00:05:45.928 "rdma_srq_size": 0, 00:05:45.928 "io_path_stat": false, 00:05:45.928 "allow_accel_sequence": false, 00:05:45.928 "rdma_max_cq_size": 0, 00:05:45.928 "rdma_cm_event_timeout_ms": 0, 00:05:45.928 "dhchap_digests": [ 00:05:45.928 "sha256", 00:05:45.928 "sha384", 00:05:45.928 "sha512" 00:05:45.928 ], 00:05:45.928 "dhchap_dhgroups": [ 00:05:45.928 "null", 00:05:45.929 "ffdhe2048", 00:05:45.929 "ffdhe3072", 00:05:45.929 "ffdhe4096", 00:05:45.929 "ffdhe6144", 00:05:45.929 "ffdhe8192" 00:05:45.929 ] 00:05:45.929 } 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "method": "bdev_nvme_set_hotplug", 00:05:45.929 "params": { 00:05:45.929 "period_us": 100000, 00:05:45.929 "enable": false 00:05:45.929 } 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "method": "bdev_wait_for_examine" 00:05:45.929 } 00:05:45.929 ] 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "scsi", 00:05:45.929 "config": null 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "scheduler", 00:05:45.929 "config": [ 00:05:45.929 { 00:05:45.929 "method": "framework_set_scheduler", 00:05:45.929 "params": { 00:05:45.929 "name": "static" 00:05:45.929 } 00:05:45.929 } 00:05:45.929 ] 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "vhost_scsi", 00:05:45.929 "config": [] 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "vhost_blk", 00:05:45.929 "config": [] 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "ublk", 00:05:45.929 "config": [] 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "nbd", 00:05:45.929 "config": [] 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "nvmf", 00:05:45.929 "config": [ 00:05:45.929 { 00:05:45.929 "method": "nvmf_set_config", 00:05:45.929 "params": { 00:05:45.929 "discovery_filter": "match_any", 00:05:45.929 "admin_cmd_passthru": { 00:05:45.929 "identify_ctrlr": false 00:05:45.929 } 00:05:45.929 } 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "method": "nvmf_set_max_subsystems", 00:05:45.929 "params": { 00:05:45.929 "max_subsystems": 1024 00:05:45.929 } 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "method": "nvmf_set_crdt", 00:05:45.929 "params": { 00:05:45.929 "crdt1": 0, 00:05:45.929 "crdt2": 0, 00:05:45.929 "crdt3": 0 00:05:45.929 } 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "method": "nvmf_create_transport", 00:05:45.929 "params": { 00:05:45.929 "trtype": "TCP", 00:05:45.929 "max_queue_depth": 128, 00:05:45.929 "max_io_qpairs_per_ctrlr": 127, 00:05:45.929 "in_capsule_data_size": 4096, 00:05:45.929 "max_io_size": 131072, 00:05:45.929 "io_unit_size": 131072, 00:05:45.929 "max_aq_depth": 128, 00:05:45.929 "num_shared_buffers": 511, 00:05:45.929 "buf_cache_size": 4294967295, 00:05:45.929 "dif_insert_or_strip": false, 00:05:45.929 "zcopy": false, 00:05:45.929 "c2h_success": true, 00:05:45.929 "sock_priority": 0, 00:05:45.929 "abort_timeout_sec": 1, 00:05:45.929 "ack_timeout": 0, 00:05:45.929 "data_wr_pool_size": 0 00:05:45.929 } 00:05:45.929 } 00:05:45.929 ] 00:05:45.929 }, 00:05:45.929 { 00:05:45.929 "subsystem": "iscsi", 00:05:45.929 "config": [ 00:05:45.929 { 00:05:45.929 "method": "iscsi_set_options", 00:05:45.929 "params": { 00:05:45.929 "node_base": "iqn.2016-06.io.spdk", 00:05:45.929 "max_sessions": 128, 00:05:45.929 "max_connections_per_session": 2, 00:05:45.929 "max_queue_depth": 64, 00:05:45.929 "default_time2wait": 2, 00:05:45.929 "default_time2retain": 20, 00:05:45.929 "first_burst_length": 8192, 00:05:45.929 "immediate_data": true, 00:05:45.929 "allow_duplicated_isid": 
false, 00:05:45.929 "error_recovery_level": 0, 00:05:45.929 "nop_timeout": 60, 00:05:45.929 "nop_in_interval": 30, 00:05:45.929 "disable_chap": false, 00:05:45.929 "require_chap": false, 00:05:45.929 "mutual_chap": false, 00:05:45.929 "chap_group": 0, 00:05:45.929 "max_large_datain_per_connection": 64, 00:05:45.929 "max_r2t_per_connection": 4, 00:05:45.929 "pdu_pool_size": 36864, 00:05:45.929 "immediate_data_pool_size": 16384, 00:05:45.929 "data_out_pool_size": 2048 00:05:45.929 } 00:05:45.929 } 00:05:45.929 ] 00:05:45.929 } 00:05:45.929 ] 00:05:45.929 } 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3703958 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3703958 ']' 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3703958 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3703958 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3703958' 00:05:45.929 killing process with pid 3703958 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3703958 00:05:45.929 10:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3703958 00:05:46.189 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3704228 00:05:46.189 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.189 10:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3704228 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3704228 ']' 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3704228 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3704228 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3704228' 00:05:51.480 killing process with pid 3704228 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3704228 00:05:51.480 10:20:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
3704228 00:05:51.480 10:20:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:51.480 10:20:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:51.480 00:05:51.480 real 0m6.728s 00:05:51.480 user 0m6.506s 00:05:51.480 sys 0m0.627s 00:05:51.480 10:20:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.480 10:20:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.480 ************************************ 00:05:51.480 END TEST skip_rpc_with_json 00:05:51.480 ************************************ 00:05:51.740 10:20:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:51.740 10:20:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.740 10:20:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.740 10:20:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.740 ************************************ 00:05:51.740 START TEST skip_rpc_with_delay 00:05:51.740 ************************************ 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.740 [2024-07-25 10:20:55.302866] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
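The '--wait-for-rpc' error just above is exactly the failure skip_rpc_with_delay asserts. The skip_rpc_with_json run before it is a JSON round trip: configure over RPC, save_config, then reboot from the JSON alone. Reduced to plain shell, same path assumptions; config.json and log.txt are scratch names standing in for the files under test/rpc/:

./spdk/build/bin/spdk_tgt -m 0x1 &                                  # first boot: configure over RPC
# ...wait for the socket as before, then:
./spdk/scripts/rpc.py nvmf_create_transport -t tcp
./spdk/scripts/rpc.py save_config > config.json                     # snapshot the live configuration
kill %1; wait

./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' log.txt && echo "transport restored from JSON alone"
kill %1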
00:05:51.740 [2024-07-25 10:20:55.302927] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.740 00:05:51.740 real 0m0.062s 00:05:51.740 user 0m0.038s 00:05:51.740 sys 0m0.023s 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.740 10:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:51.740 ************************************ 00:05:51.740 END TEST skip_rpc_with_delay 00:05:51.741 ************************************ 00:05:51.741 10:20:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:51.741 10:20:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:51.741 10:20:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:51.741 10:20:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.741 10:20:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.741 10:20:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.741 ************************************ 00:05:51.741 START TEST exit_on_failed_rpc_init 00:05:51.741 ************************************ 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3705331 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3705331 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3705331 ']' 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.741 10:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.001 [2024-07-25 10:20:55.454468] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
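Before exercising the failure path, exit_on_failed_rpc_init brings up a first target and blocks until its RPC socket answers ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A bash sketch of that wait, assuming the rpc.py client and the default socket path seen in the log; the retry count, sleep interval, and function name are assumptions rather than the real waitforlisten values.

    # Poll the target's JSON-RPC Unix socket until it responds or we give up.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} tries=100
        while (( tries-- > 0 )); do
            # rpc_get_methods is a cheap request; a reply means the server is up.
            if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        echo "timed out waiting for RPC socket $sock" >&2
        return 1
    }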
00:05:52.001 [2024-07-25 10:20:55.454513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705331 ] 00:05:52.001 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.001 [2024-07-25 10:20:55.523420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.001 [2024-07-25 10:20:55.589332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.571 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.572 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.572 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.572 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.832 [2024-07-25 10:20:56.287984] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
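The trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT line above is how each of these tests guarantees the launched target is torn down even when an assertion fails. A condensed sketch of that kill-and-reap idiom; the function body is an assumption (the real killprocess also verifies the process name, which is what the ps --no-headers -o comm= probes in this log are doing).

    # Kill a previously launched child process and reap it so no zombie remains.
    kill_and_reap() {
        local pid=$1
        # Only act if the pid is still alive (mirrors the ps/comm probe in the log).
        ps --no-headers -o comm= "$pid" >/dev/null 2>&1 || return 0
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

    ./build/bin/spdk_tgt -m 0x1 &        # same invocation as in the trace above
    spdk_pid=$!
    trap 'kill_and_reap "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT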
00:05:52.832 [2024-07-25 10:20:56.288035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705366 ] 00:05:52.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.832 [2024-07-25 10:20:56.356552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.832 [2024-07-25 10:20:56.426310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.832 [2024-07-25 10:20:56.426379] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:52.832 [2024-07-25 10:20:56.426390] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:52.832 [2024-07-25 10:20:56.426399] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3705331 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3705331 ']' 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3705331 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.832 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3705331 00:05:53.092 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.092 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.092 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3705331' 00:05:53.092 killing process with pid 3705331 00:05:53.092 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3705331 00:05:53.092 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3705331 00:05:53.388 00:05:53.388 real 0m1.464s 00:05:53.388 user 0m1.639s 00:05:53.388 sys 0m0.448s 00:05:53.388 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.389 10:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.389 ************************************ 00:05:53.389 END TEST exit_on_failed_rpc_init 00:05:53.389 ************************************ 00:05:53.389 10:20:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.389 00:05:53.389 real 0m14.053s 00:05:53.389 user 0m13.471s 00:05:53.389 sys 0m1.686s 00:05:53.389 10:20:56 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.389 10:20:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.389 ************************************ 00:05:53.389 END TEST skip_rpc 00:05:53.389 ************************************ 00:05:53.389 10:20:56 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.389 10:20:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.389 10:20:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.389 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:05:53.389 ************************************ 00:05:53.389 START TEST rpc_client 00:05:53.389 ************************************ 00:05:53.389 10:20:56 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.389 * Looking for test storage... 00:05:53.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:53.649 10:20:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:53.649 OK 00:05:53.649 10:20:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:53.649 00:05:53.649 real 0m0.134s 00:05:53.649 user 0m0.061s 00:05:53.649 sys 0m0.083s 00:05:53.649 10:20:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.649 10:20:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:53.649 ************************************ 00:05:53.649 END TEST rpc_client 00:05:53.649 ************************************ 00:05:53.649 10:20:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.649 10:20:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.649 10:20:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.649 10:20:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.649 ************************************ 00:05:53.649 START TEST json_config 00:05:53.649 ************************************ 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
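The json_config run that starts here drives every configuration step through a dedicated RPC socket; in the traces that follow each tgt_rpc call expands to scripts/rpc.py -s /var/tmp/spdk_tgt.sock <method>. A stand-alone sketch of that wrapper, with a piped load_config example mirroring the trace; the function body is illustrative, not the real test/json_config/common.sh.

    # Send one JSON-RPC method to the target under test; the json_config steps
    # below (load_config, save_config, bdev_malloc_create, ...) all go this way.
    tgt_rpc() {
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
    }

    # Example mirroring the trace: generate an NVMe bdev config and load it.
    ./scripts/gen_nvme.sh --json-with-subsystems | tgt_rpc load_config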
00:05:53.649 10:20:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:53.649 10:20:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.649 10:20:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.649 10:20:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.649 10:20:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.649 10:20:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.649 10:20:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.649 10:20:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:53.649 10:20:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@47 -- # : 0 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:53.649 10:20:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:53.649 INFO: JSON configuration test init 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.649 10:20:57 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:53.649 10:20:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:53.649 10:20:57 json_config -- json_config/common.sh@10 -- # shift 00:05:53.649 10:20:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.649 10:20:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.649 10:20:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.649 10:20:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:05:53.649 10:20:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.649 10:20:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3705720 00:05:53.649 10:20:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.649 Waiting for target to run... 00:05:53.649 10:20:57 json_config -- json_config/common.sh@25 -- # waitforlisten 3705720 /var/tmp/spdk_tgt.sock 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 3705720 ']' 00:05:53.649 10:20:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.649 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.909 [2024-07-25 10:20:57.380099] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:05:53.909 [2024-07-25 10:20:57.380149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705720 ] 00:05:53.909 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.168 [2024-07-25 10:20:57.820877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.427 [2024-07-25 10:20:57.908169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.687 10:20:58 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.687 10:20:58 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:54.687 10:20:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.687 00:05:54.687 10:20:58 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:54.687 10:20:58 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:54.687 10:20:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.687 10:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.687 10:20:58 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:54.687 10:20:58 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:54.687 10:20:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.687 10:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.687 10:20:58 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:54.687 10:20:58 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:54.687 10:20:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:57.981 10:21:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.981 10:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:57.981 10:21:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@51 -- # sort 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:57.981 10:21:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.981 10:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:57.981 10:21:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.981 10:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:57.981 10:21:01 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:57.981 10:21:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:58.241 MallocForNvmf0 00:05:58.241 
10:21:01 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:58.241 10:21:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:58.241 MallocForNvmf1 00:05:58.241 10:21:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:58.241 10:21:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:58.500 [2024-07-25 10:21:02.060680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.500 10:21:02 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:58.500 10:21:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:58.760 10:21:02 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:58.760 10:21:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:58.760 10:21:02 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:58.760 10:21:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:59.019 10:21:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:59.020 10:21:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:59.280 [2024-07-25 10:21:02.742834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:59.280 10:21:02 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:59.280 10:21:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.280 10:21:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.280 10:21:02 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:59.280 10:21:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.280 10:21:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.280 10:21:02 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:59.280 10:21:02 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:59.280 10:21:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:59.539 MallocBdevForConfigChangeCheck 00:05:59.539 10:21:03 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:59.539 10:21:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.539 10:21:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.539 10:21:03 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:59.539 10:21:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.806 10:21:03 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:59.806 INFO: shutting down applications... 00:05:59.806 10:21:03 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:59.806 10:21:03 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:59.806 10:21:03 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:59.806 10:21:03 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:02.348 Calling clear_iscsi_subsystem 00:06:02.348 Calling clear_nvmf_subsystem 00:06:02.348 Calling clear_nbd_subsystem 00:06:02.348 Calling clear_ublk_subsystem 00:06:02.348 Calling clear_vhost_blk_subsystem 00:06:02.348 Calling clear_vhost_scsi_subsystem 00:06:02.348 Calling clear_bdev_subsystem 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@349 -- # break 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:02.348 10:21:05 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:02.348 10:21:05 json_config -- json_config/common.sh@31 -- # local app=target 00:06:02.348 10:21:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.348 10:21:05 json_config -- json_config/common.sh@35 -- # [[ -n 3705720 ]] 00:06:02.348 10:21:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3705720 00:06:02.348 10:21:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.348 10:21:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.348 10:21:05 json_config -- json_config/common.sh@41 -- # kill -0 3705720 00:06:02.348 10:21:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.608 10:21:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.608 10:21:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.608 10:21:06 json_config -- json_config/common.sh@41 -- # kill -0 3705720 00:06:02.608 10:21:06 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.608 10:21:06 json_config -- json_config/common.sh@43 -- # break 00:06:02.608 10:21:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.608 10:21:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.608 SPDK target shutdown done 00:06:02.608 10:21:06 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:02.608 INFO: relaunching applications... 00:06:02.608 10:21:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.608 10:21:06 json_config -- json_config/common.sh@9 -- # local app=target 00:06:02.608 10:21:06 json_config -- json_config/common.sh@10 -- # shift 00:06:02.608 10:21:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.608 10:21:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.608 10:21:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.608 10:21:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.608 10:21:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.608 10:21:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3707984 00:06:02.608 10:21:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.608 Waiting for target to run... 00:06:02.608 10:21:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.608 10:21:06 json_config -- json_config/common.sh@25 -- # waitforlisten 3707984 /var/tmp/spdk_tgt.sock 00:06:02.608 10:21:06 json_config -- common/autotest_common.sh@831 -- # '[' -z 3707984 ']' 00:06:02.608 10:21:06 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.608 10:21:06 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.608 10:21:06 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.608 10:21:06 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.608 10:21:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.868 [2024-07-25 10:21:06.333467] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
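What just happened above is the heart of the json_config check: the live configuration was dumped with save_config, the target was shut down, and a fresh target was started purely from that JSON file (spdk_tgt ... --json spdk_tgt_config.json). Condensed into a stand-alone sketch, with the paths and flags taken from the trace and the surrounding control flow as an assumption:

    # 1. Snapshot the live configuration of the running target.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json

    # 2. Stop the target, then start a new one driven only by that JSON file.
    kill -SIGINT "$spdk_pid"
    wait "$spdk_pid" 2>/dev/null || true
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    spdk_pid=$!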
00:06:02.868 [2024-07-25 10:21:06.333527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3707984 ] 00:06:02.868 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.127 [2024-07-25 10:21:06.772321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.387 [2024-07-25 10:21:06.860889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.677 [2024-07-25 10:21:09.892488] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.677 [2024-07-25 10:21:09.924855] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.936 10:21:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.936 10:21:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:06.936 10:21:10 json_config -- json_config/common.sh@26 -- # echo '' 00:06:06.936 00:06:06.936 10:21:10 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:06.936 10:21:10 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:06.936 INFO: Checking if target configuration is the same... 00:06:06.936 10:21:10 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.936 10:21:10 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:06.936 10:21:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.936 + '[' 2 -ne 2 ']' 00:06:06.936 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:06.936 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:06.936 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:06.936 +++ basename /dev/fd/62 00:06:06.936 ++ mktemp /tmp/62.XXX 00:06:06.936 + tmp_file_1=/tmp/62.Cfn 00:06:06.936 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.936 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:06.936 + tmp_file_2=/tmp/spdk_tgt_config.json.vJl 00:06:06.936 + ret=0 00:06:06.936 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.195 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.195 + diff -u /tmp/62.Cfn /tmp/spdk_tgt_config.json.vJl 00:06:07.195 + echo 'INFO: JSON config files are the same' 00:06:07.195 INFO: JSON config files are the same 00:06:07.195 + rm /tmp/62.Cfn /tmp/spdk_tgt_config.json.vJl 00:06:07.195 + exit 0 00:06:07.195 10:21:10 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:07.196 10:21:10 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:07.196 INFO: changing configuration and checking if this can be detected... 
00:06:07.196 10:21:10 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.196 10:21:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.455 10:21:11 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.455 10:21:11 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:07.455 10:21:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.455 + '[' 2 -ne 2 ']' 00:06:07.455 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.455 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:07.455 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.455 +++ basename /dev/fd/62 00:06:07.455 ++ mktemp /tmp/62.XXX 00:06:07.455 + tmp_file_1=/tmp/62.BCg 00:06:07.455 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.455 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.455 + tmp_file_2=/tmp/spdk_tgt_config.json.GzI 00:06:07.455 + ret=0 00:06:07.455 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.713 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.713 + diff -u /tmp/62.BCg /tmp/spdk_tgt_config.json.GzI 00:06:07.713 + ret=1 00:06:07.713 + echo '=== Start of file: /tmp/62.BCg ===' 00:06:07.713 + cat /tmp/62.BCg 00:06:07.713 + echo '=== End of file: /tmp/62.BCg ===' 00:06:07.713 + echo '' 00:06:07.713 + echo '=== Start of file: /tmp/spdk_tgt_config.json.GzI ===' 00:06:07.713 + cat /tmp/spdk_tgt_config.json.GzI 00:06:07.713 + echo '=== End of file: /tmp/spdk_tgt_config.json.GzI ===' 00:06:07.713 + echo '' 00:06:07.713 + rm /tmp/62.BCg /tmp/spdk_tgt_config.json.GzI 00:06:07.713 + exit 1 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:07.713 INFO: configuration change detected. 
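The '+' trace lines above show how the comparison itself works: both JSON documents are normalized with config_filter.py -method sort and then diffed, so an empty diff earlier meant "JSON config files are the same" (exit 0), while deleting MallocBdevForConfigChangeCheck makes the diff non-empty and the check return 1. As a stand-alone sketch (the temporary file names are assumptions):

    # Normalize and compare the live configuration against the saved JSON file.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/file_sorted.json

    if diff -u /tmp/live_sorted.json /tmp/file_sorted.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi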
00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:07.713 10:21:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.713 10:21:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@321 -- # [[ -n 3707984 ]] 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:07.713 10:21:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.713 10:21:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:07.713 10:21:11 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:07.713 10:21:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.713 10:21:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.971 10:21:11 json_config -- json_config/json_config.sh@327 -- # killprocess 3707984 00:06:07.971 10:21:11 json_config -- common/autotest_common.sh@950 -- # '[' -z 3707984 ']' 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@954 -- # kill -0 3707984 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@955 -- # uname 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3707984 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3707984' 00:06:07.972 killing process with pid 3707984 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@969 -- # kill 3707984 00:06:07.972 10:21:11 json_config -- common/autotest_common.sh@974 -- # wait 3707984 00:06:10.532 10:21:13 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.532 10:21:13 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:10.532 10:21:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.532 10:21:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.532 10:21:13 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:10.532 10:21:13 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:10.532 INFO: Success 00:06:10.532 00:06:10.532 real 0m16.430s 
00:06:10.532 user 0m16.875s 00:06:10.532 sys 0m2.313s 00:06:10.532 10:21:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.532 10:21:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.532 ************************************ 00:06:10.532 END TEST json_config 00:06:10.532 ************************************ 00:06:10.532 10:21:13 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.532 10:21:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.532 10:21:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.532 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.532 ************************************ 00:06:10.532 START TEST json_config_extra_key 00:06:10.532 ************************************ 00:06:10.533 10:21:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.533 10:21:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.533 10:21:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.533 10:21:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.533 10:21:13 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 10:21:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 10:21:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 10:21:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.533 10:21:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.533 10:21:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.533 10:21:13 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.533 INFO: launching applications... 00:06:10.533 10:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3709424 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.533 Waiting for target to run... 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3709424 /var/tmp/spdk_tgt.sock 00:06:10.533 10:21:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3709424 ']' 00:06:10.533 10:21:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.533 10:21:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.533 10:21:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.533 10:21:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.533 10:21:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.533 10:21:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.533 [2024-07-25 10:21:13.883348] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
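Shutdown of the target in these tests is graceful rather than forced: the script sends SIGINT and then probes the pid with kill -0 for up to 30 half-second intervals, which is what the (( i < 30 )) / sleep 0.5 trace lines before and after this point are doing. A sketch of that poll; the function name and error handling are assumptions.

    # Ask the target to shut down, then wait up to ~15 seconds for it to exit.
    shutdown_target() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "target $pid is still running after SIGINT" >&2
        return 1
    }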
00:06:10.533 [2024-07-25 10:21:13.883403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709424 ] 00:06:10.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.797 [2024-07-25 10:21:14.317379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.798 [2024-07-25 10:21:14.405952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.056 10:21:14 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.056 10:21:14 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.056 00:06:11.056 10:21:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.056 INFO: shutting down applications... 00:06:11.056 10:21:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3709424 ]] 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3709424 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3709424 00:06:11.056 10:21:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.625 10:21:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.625 10:21:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.625 10:21:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3709424 00:06:11.625 10:21:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.625 10:21:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.625 10:21:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.625 10:21:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.625 SPDK target shutdown done 00:06:11.625 10:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.625 Success 00:06:11.625 00:06:11.625 real 0m1.449s 00:06:11.625 user 0m1.032s 00:06:11.625 sys 0m0.559s 00:06:11.625 10:21:15 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.625 10:21:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.625 ************************************ 00:06:11.625 END TEST json_config_extra_key 00:06:11.625 ************************************ 00:06:11.625 10:21:15 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.625 10:21:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.625 10:21:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.625 10:21:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.625 
************************************ 00:06:11.625 START TEST alias_rpc 00:06:11.625 ************************************ 00:06:11.625 10:21:15 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.885 * Looking for test storage... 00:06:11.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:11.885 10:21:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.885 10:21:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3709738 00:06:11.885 10:21:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.885 10:21:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3709738 00:06:11.885 10:21:15 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3709738 ']' 00:06:11.885 10:21:15 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.885 10:21:15 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.885 10:21:15 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.885 10:21:15 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.885 10:21:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.885 [2024-07-25 10:21:15.402141] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:11.885 [2024-07-25 10:21:15.402188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709738 ] 00:06:11.885 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.885 [2024-07-25 10:21:15.470706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.885 [2024-07-25 10:21:15.540598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.822 10:21:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:12.822 10:21:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3709738 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3709738 ']' 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3709738 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3709738 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3709738' 00:06:12.822 killing process with pid 3709738 00:06:12.822 10:21:16 alias_rpc -- common/autotest_common.sh@969 -- # kill 3709738 00:06:12.822 10:21:16 
alias_rpc -- common/autotest_common.sh@974 -- # wait 3709738 00:06:13.081 00:06:13.081 real 0m1.508s 00:06:13.081 user 0m1.627s 00:06:13.081 sys 0m0.436s 00:06:13.081 10:21:16 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.081 10:21:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.081 ************************************ 00:06:13.081 END TEST alias_rpc 00:06:13.081 ************************************ 00:06:13.341 10:21:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:13.341 10:21:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.341 10:21:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.341 10:21:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.341 10:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:13.341 ************************************ 00:06:13.341 START TEST spdkcli_tcp 00:06:13.341 ************************************ 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.341 * Looking for test storage... 00:06:13.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3710059 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3710059 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3710059 ']' 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.341 10:21:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.341 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.341 [2024-07-25 10:21:17.008243] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:13.341 [2024-07-25 10:21:17.008298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710059 ] 00:06:13.341 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.601 [2024-07-25 10:21:17.076794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.601 [2024-07-25 10:21:17.151153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.601 [2024-07-25 10:21:17.151155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.169 10:21:17 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.169 10:21:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:14.169 10:21:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3710137 00:06:14.169 10:21:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.169 10:21:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:14.430 [ 00:06:14.430 "bdev_malloc_delete", 00:06:14.430 "bdev_malloc_create", 00:06:14.430 "bdev_null_resize", 00:06:14.430 "bdev_null_delete", 00:06:14.430 "bdev_null_create", 00:06:14.430 "bdev_nvme_cuse_unregister", 00:06:14.430 "bdev_nvme_cuse_register", 00:06:14.430 "bdev_opal_new_user", 00:06:14.430 "bdev_opal_set_lock_state", 00:06:14.430 "bdev_opal_delete", 00:06:14.430 "bdev_opal_get_info", 00:06:14.430 "bdev_opal_create", 00:06:14.430 "bdev_nvme_opal_revert", 00:06:14.430 "bdev_nvme_opal_init", 00:06:14.430 "bdev_nvme_send_cmd", 00:06:14.430 "bdev_nvme_get_path_iostat", 00:06:14.430 "bdev_nvme_get_mdns_discovery_info", 00:06:14.430 "bdev_nvme_stop_mdns_discovery", 00:06:14.430 "bdev_nvme_start_mdns_discovery", 00:06:14.430 "bdev_nvme_set_multipath_policy", 00:06:14.430 "bdev_nvme_set_preferred_path", 00:06:14.430 "bdev_nvme_get_io_paths", 00:06:14.430 "bdev_nvme_remove_error_injection", 00:06:14.430 "bdev_nvme_add_error_injection", 00:06:14.430 "bdev_nvme_get_discovery_info", 00:06:14.430 "bdev_nvme_stop_discovery", 00:06:14.430 "bdev_nvme_start_discovery", 00:06:14.430 "bdev_nvme_get_controller_health_info", 00:06:14.430 "bdev_nvme_disable_controller", 00:06:14.430 "bdev_nvme_enable_controller", 00:06:14.430 "bdev_nvme_reset_controller", 00:06:14.430 "bdev_nvme_get_transport_statistics", 00:06:14.430 "bdev_nvme_apply_firmware", 00:06:14.430 "bdev_nvme_detach_controller", 00:06:14.430 "bdev_nvme_get_controllers", 00:06:14.430 "bdev_nvme_attach_controller", 00:06:14.430 "bdev_nvme_set_hotplug", 00:06:14.430 "bdev_nvme_set_options", 00:06:14.430 "bdev_passthru_delete", 00:06:14.430 "bdev_passthru_create", 00:06:14.430 "bdev_lvol_set_parent_bdev", 00:06:14.430 "bdev_lvol_set_parent", 00:06:14.430 "bdev_lvol_check_shallow_copy", 00:06:14.430 "bdev_lvol_start_shallow_copy", 00:06:14.430 "bdev_lvol_grow_lvstore", 00:06:14.430 "bdev_lvol_get_lvols", 00:06:14.430 "bdev_lvol_get_lvstores", 00:06:14.430 "bdev_lvol_delete", 00:06:14.430 "bdev_lvol_set_read_only", 00:06:14.430 "bdev_lvol_resize", 00:06:14.430 "bdev_lvol_decouple_parent", 00:06:14.430 "bdev_lvol_inflate", 00:06:14.430 "bdev_lvol_rename", 00:06:14.430 "bdev_lvol_clone_bdev", 00:06:14.430 "bdev_lvol_clone", 00:06:14.430 "bdev_lvol_snapshot", 00:06:14.430 "bdev_lvol_create", 00:06:14.430 "bdev_lvol_delete_lvstore", 00:06:14.430 
"bdev_lvol_rename_lvstore", 00:06:14.430 "bdev_lvol_create_lvstore", 00:06:14.430 "bdev_raid_set_options", 00:06:14.430 "bdev_raid_remove_base_bdev", 00:06:14.430 "bdev_raid_add_base_bdev", 00:06:14.430 "bdev_raid_delete", 00:06:14.430 "bdev_raid_create", 00:06:14.430 "bdev_raid_get_bdevs", 00:06:14.430 "bdev_error_inject_error", 00:06:14.430 "bdev_error_delete", 00:06:14.430 "bdev_error_create", 00:06:14.430 "bdev_split_delete", 00:06:14.430 "bdev_split_create", 00:06:14.430 "bdev_delay_delete", 00:06:14.430 "bdev_delay_create", 00:06:14.430 "bdev_delay_update_latency", 00:06:14.430 "bdev_zone_block_delete", 00:06:14.430 "bdev_zone_block_create", 00:06:14.430 "blobfs_create", 00:06:14.430 "blobfs_detect", 00:06:14.430 "blobfs_set_cache_size", 00:06:14.430 "bdev_aio_delete", 00:06:14.430 "bdev_aio_rescan", 00:06:14.430 "bdev_aio_create", 00:06:14.430 "bdev_ftl_set_property", 00:06:14.430 "bdev_ftl_get_properties", 00:06:14.430 "bdev_ftl_get_stats", 00:06:14.430 "bdev_ftl_unmap", 00:06:14.430 "bdev_ftl_unload", 00:06:14.430 "bdev_ftl_delete", 00:06:14.430 "bdev_ftl_load", 00:06:14.430 "bdev_ftl_create", 00:06:14.430 "bdev_virtio_attach_controller", 00:06:14.430 "bdev_virtio_scsi_get_devices", 00:06:14.430 "bdev_virtio_detach_controller", 00:06:14.430 "bdev_virtio_blk_set_hotplug", 00:06:14.430 "bdev_iscsi_delete", 00:06:14.430 "bdev_iscsi_create", 00:06:14.430 "bdev_iscsi_set_options", 00:06:14.430 "accel_error_inject_error", 00:06:14.430 "ioat_scan_accel_module", 00:06:14.430 "dsa_scan_accel_module", 00:06:14.430 "iaa_scan_accel_module", 00:06:14.430 "vfu_virtio_create_scsi_endpoint", 00:06:14.430 "vfu_virtio_scsi_remove_target", 00:06:14.430 "vfu_virtio_scsi_add_target", 00:06:14.430 "vfu_virtio_create_blk_endpoint", 00:06:14.430 "vfu_virtio_delete_endpoint", 00:06:14.430 "keyring_file_remove_key", 00:06:14.430 "keyring_file_add_key", 00:06:14.430 "keyring_linux_set_options", 00:06:14.430 "iscsi_get_histogram", 00:06:14.430 "iscsi_enable_histogram", 00:06:14.430 "iscsi_set_options", 00:06:14.430 "iscsi_get_auth_groups", 00:06:14.430 "iscsi_auth_group_remove_secret", 00:06:14.430 "iscsi_auth_group_add_secret", 00:06:14.430 "iscsi_delete_auth_group", 00:06:14.430 "iscsi_create_auth_group", 00:06:14.430 "iscsi_set_discovery_auth", 00:06:14.430 "iscsi_get_options", 00:06:14.430 "iscsi_target_node_request_logout", 00:06:14.430 "iscsi_target_node_set_redirect", 00:06:14.430 "iscsi_target_node_set_auth", 00:06:14.430 "iscsi_target_node_add_lun", 00:06:14.430 "iscsi_get_stats", 00:06:14.430 "iscsi_get_connections", 00:06:14.430 "iscsi_portal_group_set_auth", 00:06:14.430 "iscsi_start_portal_group", 00:06:14.430 "iscsi_delete_portal_group", 00:06:14.430 "iscsi_create_portal_group", 00:06:14.430 "iscsi_get_portal_groups", 00:06:14.430 "iscsi_delete_target_node", 00:06:14.430 "iscsi_target_node_remove_pg_ig_maps", 00:06:14.430 "iscsi_target_node_add_pg_ig_maps", 00:06:14.430 "iscsi_create_target_node", 00:06:14.430 "iscsi_get_target_nodes", 00:06:14.430 "iscsi_delete_initiator_group", 00:06:14.430 "iscsi_initiator_group_remove_initiators", 00:06:14.430 "iscsi_initiator_group_add_initiators", 00:06:14.430 "iscsi_create_initiator_group", 00:06:14.430 "iscsi_get_initiator_groups", 00:06:14.430 "nvmf_set_crdt", 00:06:14.430 "nvmf_set_config", 00:06:14.430 "nvmf_set_max_subsystems", 00:06:14.430 "nvmf_stop_mdns_prr", 00:06:14.430 "nvmf_publish_mdns_prr", 00:06:14.430 "nvmf_subsystem_get_listeners", 00:06:14.430 "nvmf_subsystem_get_qpairs", 00:06:14.430 "nvmf_subsystem_get_controllers", 00:06:14.430 
"nvmf_get_stats", 00:06:14.430 "nvmf_get_transports", 00:06:14.430 "nvmf_create_transport", 00:06:14.430 "nvmf_get_targets", 00:06:14.430 "nvmf_delete_target", 00:06:14.430 "nvmf_create_target", 00:06:14.430 "nvmf_subsystem_allow_any_host", 00:06:14.430 "nvmf_subsystem_remove_host", 00:06:14.430 "nvmf_subsystem_add_host", 00:06:14.430 "nvmf_ns_remove_host", 00:06:14.430 "nvmf_ns_add_host", 00:06:14.430 "nvmf_subsystem_remove_ns", 00:06:14.430 "nvmf_subsystem_add_ns", 00:06:14.430 "nvmf_subsystem_listener_set_ana_state", 00:06:14.430 "nvmf_discovery_get_referrals", 00:06:14.430 "nvmf_discovery_remove_referral", 00:06:14.430 "nvmf_discovery_add_referral", 00:06:14.430 "nvmf_subsystem_remove_listener", 00:06:14.430 "nvmf_subsystem_add_listener", 00:06:14.430 "nvmf_delete_subsystem", 00:06:14.430 "nvmf_create_subsystem", 00:06:14.430 "nvmf_get_subsystems", 00:06:14.430 "env_dpdk_get_mem_stats", 00:06:14.430 "nbd_get_disks", 00:06:14.430 "nbd_stop_disk", 00:06:14.430 "nbd_start_disk", 00:06:14.430 "ublk_recover_disk", 00:06:14.430 "ublk_get_disks", 00:06:14.430 "ublk_stop_disk", 00:06:14.430 "ublk_start_disk", 00:06:14.430 "ublk_destroy_target", 00:06:14.430 "ublk_create_target", 00:06:14.430 "virtio_blk_create_transport", 00:06:14.430 "virtio_blk_get_transports", 00:06:14.430 "vhost_controller_set_coalescing", 00:06:14.430 "vhost_get_controllers", 00:06:14.430 "vhost_delete_controller", 00:06:14.430 "vhost_create_blk_controller", 00:06:14.430 "vhost_scsi_controller_remove_target", 00:06:14.430 "vhost_scsi_controller_add_target", 00:06:14.430 "vhost_start_scsi_controller", 00:06:14.430 "vhost_create_scsi_controller", 00:06:14.430 "thread_set_cpumask", 00:06:14.430 "framework_get_governor", 00:06:14.430 "framework_get_scheduler", 00:06:14.430 "framework_set_scheduler", 00:06:14.430 "framework_get_reactors", 00:06:14.430 "thread_get_io_channels", 00:06:14.430 "thread_get_pollers", 00:06:14.430 "thread_get_stats", 00:06:14.430 "framework_monitor_context_switch", 00:06:14.430 "spdk_kill_instance", 00:06:14.430 "log_enable_timestamps", 00:06:14.430 "log_get_flags", 00:06:14.430 "log_clear_flag", 00:06:14.430 "log_set_flag", 00:06:14.430 "log_get_level", 00:06:14.430 "log_set_level", 00:06:14.430 "log_get_print_level", 00:06:14.431 "log_set_print_level", 00:06:14.431 "framework_enable_cpumask_locks", 00:06:14.431 "framework_disable_cpumask_locks", 00:06:14.431 "framework_wait_init", 00:06:14.431 "framework_start_init", 00:06:14.431 "scsi_get_devices", 00:06:14.431 "bdev_get_histogram", 00:06:14.431 "bdev_enable_histogram", 00:06:14.431 "bdev_set_qos_limit", 00:06:14.431 "bdev_set_qd_sampling_period", 00:06:14.431 "bdev_get_bdevs", 00:06:14.431 "bdev_reset_iostat", 00:06:14.431 "bdev_get_iostat", 00:06:14.431 "bdev_examine", 00:06:14.431 "bdev_wait_for_examine", 00:06:14.431 "bdev_set_options", 00:06:14.431 "notify_get_notifications", 00:06:14.431 "notify_get_types", 00:06:14.431 "accel_get_stats", 00:06:14.431 "accel_set_options", 00:06:14.431 "accel_set_driver", 00:06:14.431 "accel_crypto_key_destroy", 00:06:14.431 "accel_crypto_keys_get", 00:06:14.431 "accel_crypto_key_create", 00:06:14.431 "accel_assign_opc", 00:06:14.431 "accel_get_module_info", 00:06:14.431 "accel_get_opc_assignments", 00:06:14.431 "vmd_rescan", 00:06:14.431 "vmd_remove_device", 00:06:14.431 "vmd_enable", 00:06:14.431 "sock_get_default_impl", 00:06:14.431 "sock_set_default_impl", 00:06:14.431 "sock_impl_set_options", 00:06:14.431 "sock_impl_get_options", 00:06:14.431 "iobuf_get_stats", 00:06:14.431 "iobuf_set_options", 
00:06:14.431 "keyring_get_keys", 00:06:14.431 "framework_get_pci_devices", 00:06:14.431 "framework_get_config", 00:06:14.431 "framework_get_subsystems", 00:06:14.431 "vfu_tgt_set_base_path", 00:06:14.431 "trace_get_info", 00:06:14.431 "trace_get_tpoint_group_mask", 00:06:14.431 "trace_disable_tpoint_group", 00:06:14.431 "trace_enable_tpoint_group", 00:06:14.431 "trace_clear_tpoint_mask", 00:06:14.431 "trace_set_tpoint_mask", 00:06:14.431 "spdk_get_version", 00:06:14.431 "rpc_get_methods" 00:06:14.431 ] 00:06:14.431 10:21:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:14.431 10:21:17 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.431 10:21:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.431 10:21:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:14.431 10:21:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3710059 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3710059 ']' 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3710059 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3710059 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3710059' 00:06:14.431 killing process with pid 3710059 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3710059 00:06:14.431 10:21:18 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3710059 00:06:14.691 00:06:14.691 real 0m1.543s 00:06:14.691 user 0m2.801s 00:06:14.691 sys 0m0.503s 00:06:14.691 10:21:18 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.691 10:21:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.691 ************************************ 00:06:14.691 END TEST spdkcli_tcp 00:06:14.691 ************************************ 00:06:14.951 10:21:18 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.951 10:21:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.951 10:21:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.951 10:21:18 -- common/autotest_common.sh@10 -- # set +x 00:06:14.951 ************************************ 00:06:14.951 START TEST dpdk_mem_utility 00:06:14.951 ************************************ 00:06:14.951 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.951 * Looking for test storage... 
00:06:14.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:14.951 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.951 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3710394 00:06:14.951 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3710394 00:06:14.951 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.951 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3710394 ']' 00:06:14.951 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.951 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.951 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.951 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.951 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.951 [2024-07-25 10:21:18.603173] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:14.951 [2024-07-25 10:21:18.603221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710394 ] 00:06:14.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.210 [2024-07-25 10:21:18.672957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.210 [2024-07-25 10:21:18.742333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.778 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.778 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:15.778 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:15.778 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:15.778 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.778 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.778 { 00:06:15.778 "filename": "/tmp/spdk_mem_dump.txt" 00:06:15.778 } 00:06:15.778 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.778 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:15.778 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:15.778 1 heaps totaling size 814.000000 MiB 00:06:15.778 size: 814.000000 MiB heap id: 0 00:06:15.778 end heaps---------- 00:06:15.778 8 mempools totaling size 598.116089 MiB 00:06:15.778 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:15.778 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:15.778 size: 84.521057 MiB name: bdev_io_3710394 00:06:15.778 size: 51.011292 MiB name: evtpool_3710394 00:06:15.778 
size: 50.003479 MiB name: msgpool_3710394 00:06:15.778 size: 21.763794 MiB name: PDU_Pool 00:06:15.778 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:15.778 size: 0.026123 MiB name: Session_Pool 00:06:15.778 end mempools------- 00:06:15.778 6 memzones totaling size 4.142822 MiB 00:06:15.778 size: 1.000366 MiB name: RG_ring_0_3710394 00:06:15.778 size: 1.000366 MiB name: RG_ring_1_3710394 00:06:15.778 size: 1.000366 MiB name: RG_ring_4_3710394 00:06:15.778 size: 1.000366 MiB name: RG_ring_5_3710394 00:06:15.778 size: 0.125366 MiB name: RG_ring_2_3710394 00:06:15.778 size: 0.015991 MiB name: RG_ring_3_3710394 00:06:15.778 end memzones------- 00:06:15.778 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.038 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:16.038 list of free elements. size: 12.519348 MiB 00:06:16.038 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:16.038 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:16.038 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:16.038 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:16.038 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:16.038 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:16.038 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:16.038 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:16.039 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:16.039 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:16.039 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:16.039 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:16.039 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:16.039 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:16.039 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:16.039 list of standard malloc elements. 
size: 199.218079 MiB 00:06:16.039 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:16.039 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:16.039 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.039 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:16.039 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:16.039 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.039 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:16.039 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.039 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:16.039 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.039 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:16.039 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:16.039 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:16.039 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:16.039 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:16.039 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:16.039 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:16.039 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:16.039 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:16.039 list of memzone associated elements. 
size: 602.262573 MiB 00:06:16.039 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:16.039 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.039 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:16.039 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.039 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:16.039 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3710394_0 00:06:16.039 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:16.039 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3710394_0 00:06:16.039 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:16.039 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3710394_0 00:06:16.039 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:16.039 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.039 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:16.039 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.039 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:16.039 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3710394 00:06:16.039 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:16.039 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3710394 00:06:16.039 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.039 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3710394 00:06:16.039 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:16.039 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.039 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:16.039 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.039 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:16.039 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.039 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:16.039 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.039 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:16.039 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3710394 00:06:16.039 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:16.039 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3710394 00:06:16.039 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:16.039 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3710394 00:06:16.039 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:16.039 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3710394 00:06:16.039 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:16.039 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3710394 00:06:16.039 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:16.039 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.039 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:16.039 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.039 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:16.039 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.039 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:16.039 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3710394 00:06:16.039 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:16.039 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.039 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:16.039 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.039 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:16.039 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3710394 00:06:16.039 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:16.039 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.039 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:16.039 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3710394 00:06:16.039 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:16.039 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3710394 00:06:16.039 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:16.039 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.039 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.039 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3710394 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3710394 ']' 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3710394 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3710394 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3710394' 00:06:16.039 killing process with pid 3710394 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3710394 00:06:16.039 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3710394 00:06:16.300 00:06:16.300 real 0m1.396s 00:06:16.300 user 0m1.444s 00:06:16.300 sys 0m0.432s 00:06:16.300 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.300 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.300 ************************************ 00:06:16.300 END TEST dpdk_mem_utility 00:06:16.300 ************************************ 00:06:16.300 10:21:19 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:16.300 10:21:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.300 10:21:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.300 10:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.300 ************************************ 00:06:16.300 START TEST event 00:06:16.300 ************************************ 00:06:16.300 10:21:19 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:16.560 * Looking for test storage... 
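The dpdk_mem_utility run above combines two pieces that both show up in the trace: the env_dpdk_get_mem_stats RPC, which makes the running target write a memory dump (the trace reports /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which turns that dump into the heap/mempool/memzone summary printed above and, with -m 0, into the detailed element list for heap id 0. A minimal sketch, assuming a target is already listening on the default /var/tmp/spdk.sock:

    # Sketch: ask the target to dump its DPDK memory state, then summarize the dump
    ./scripts/rpc.py env_dpdk_get_mem_stats   # trace shows it returns {"filename": "/tmp/spdk_mem_dump.txt"}

    ./scripts/dpdk_mem_info.py                # totals for heaps, mempools and memzones
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail, as run in the trace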
00:06:16.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:16.560 10:21:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:16.560 10:21:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:16.560 10:21:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.560 10:21:20 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:16.560 10:21:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.560 10:21:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.560 ************************************ 00:06:16.560 START TEST event_perf 00:06:16.560 ************************************ 00:06:16.560 10:21:20 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.560 Running I/O for 1 seconds...[2024-07-25 10:21:20.117871] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:16.560 [2024-07-25 10:21:20.117960] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710720 ] 00:06:16.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.560 [2024-07-25 10:21:20.189929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.560 [2024-07-25 10:21:20.261509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.560 [2024-07-25 10:21:20.261593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.560 [2024-07-25 10:21:20.261688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.560 [2024-07-25 10:21:20.261689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.939 Running I/O for 1 seconds... 00:06:17.939 lcore 0: 216629 00:06:17.939 lcore 1: 216629 00:06:17.939 lcore 2: 216629 00:06:17.939 lcore 3: 216630 00:06:17.939 done. 00:06:17.939 00:06:17.939 real 0m1.236s 00:06:17.939 user 0m4.139s 00:06:17.939 sys 0m0.095s 00:06:17.939 10:21:21 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.939 10:21:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.939 ************************************ 00:06:17.939 END TEST event_perf 00:06:17.939 ************************************ 00:06:17.939 10:21:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:17.939 10:21:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:17.939 10:21:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.939 10:21:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.939 ************************************ 00:06:17.939 START TEST event_reactor 00:06:17.939 ************************************ 00:06:17.939 10:21:21 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:17.939 [2024-07-25 10:21:21.431805] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:17.939 [2024-07-25 10:21:21.431875] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711001 ] 00:06:17.939 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.939 [2024-07-25 10:21:21.504445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.939 [2024-07-25 10:21:21.570011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.317 test_start 00:06:19.317 oneshot 00:06:19.317 tick 100 00:06:19.317 tick 100 00:06:19.317 tick 250 00:06:19.317 tick 100 00:06:19.317 tick 100 00:06:19.317 tick 250 00:06:19.317 tick 100 00:06:19.317 tick 500 00:06:19.317 tick 100 00:06:19.317 tick 100 00:06:19.317 tick 250 00:06:19.317 tick 100 00:06:19.317 tick 100 00:06:19.317 test_end 00:06:19.317 00:06:19.317 real 0m1.222s 00:06:19.317 user 0m1.134s 00:06:19.317 sys 0m0.085s 00:06:19.317 10:21:22 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.317 10:21:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:19.317 ************************************ 00:06:19.317 END TEST event_reactor 00:06:19.317 ************************************ 00:06:19.317 10:21:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.317 10:21:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:19.317 10:21:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.317 10:21:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.317 ************************************ 00:06:19.317 START TEST event_reactor_perf 00:06:19.317 ************************************ 00:06:19.317 10:21:22 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.317 [2024-07-25 10:21:22.739630] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:19.317 [2024-07-25 10:21:22.739711] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711234 ] 00:06:19.317 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.317 [2024-07-25 10:21:22.813207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.317 [2024-07-25 10:21:22.880551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.252 test_start 00:06:20.252 test_end 00:06:20.252 Performance: 534449 events per second 00:06:20.252 00:06:20.252 real 0m1.231s 00:06:20.252 user 0m1.130s 00:06:20.252 sys 0m0.098s 00:06:20.252 10:21:23 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.252 10:21:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.252 ************************************ 00:06:20.252 END TEST event_reactor_perf 00:06:20.252 ************************************ 00:06:20.512 10:21:23 event -- event/event.sh@49 -- # uname -s 00:06:20.512 10:21:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:20.512 10:21:23 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:20.512 10:21:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.512 10:21:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.512 10:21:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.512 ************************************ 00:06:20.512 START TEST event_scheduler 00:06:20.512 ************************************ 00:06:20.512 10:21:24 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:20.512 * Looking for test storage... 00:06:20.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:20.512 10:21:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:20.512 10:21:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3711479 00:06:20.512 10:21:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.512 10:21:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:20.512 10:21:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3711479 00:06:20.512 10:21:24 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3711479 ']' 00:06:20.512 10:21:24 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.512 10:21:24 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.512 10:21:24 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.512 10:21:24 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.512 10:21:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.512 [2024-07-25 10:21:24.183724] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:20.512 [2024-07-25 10:21:24.183778] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711479 ] 00:06:20.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.772 [2024-07-25 10:21:24.252866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.772 [2024-07-25 10:21:24.330746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.772 [2024-07-25 10:21:24.330836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.772 [2024-07-25 10:21:24.330924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.772 [2024-07-25 10:21:24.330926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.341 10:21:24 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.341 10:21:24 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:21.341 10:21:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.341 10:21:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.341 10:21:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.341 [2024-07-25 10:21:25.001287] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:21.341 [2024-07-25 10:21:25.001307] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:21.341 [2024-07-25 10:21:25.001318] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.341 [2024-07-25 10:21:25.001326] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.341 [2024-07-25 10:21:25.001333] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.341 10:21:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.341 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.341 10:21:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.341 10:21:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 [2024-07-25 10:21:25.073554] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
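Before the thread subtest below, the scheduler test app above is started with --wait-for-rpc and then configured over RPC: framework_set_scheduler dynamic selects the dynamic scheduler (producing the dpdk_governor and scheduler_dynamic notices in the trace) and framework_start_init resumes initialization. A hedged sketch of that sequence; the socket path is assumed to be the default /var/tmp/spdk.sock and the wait step is abbreviated.

    # Sketch: start the scheduler test app paused before init, then configure it via RPC
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # ... wait for /var/tmp/spdk.sock to answer, as the test does with waitforlisten ...

    ./scripts/rpc.py framework_set_scheduler dynamic   # pick the dynamic scheduler
    ./scripts/rpc.py framework_start_init              # continue subsystem initialization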
00:06:21.600 10:21:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.600 10:21:25 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.600 10:21:25 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 ************************************ 00:06:21.600 START TEST scheduler_create_thread 00:06:21.600 ************************************ 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 2 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 3 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 4 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 5 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 6 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 7 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.600 8 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.600 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.601 9 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.601 10 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.601 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:22.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.545 10:21:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.545 10:21:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:23.545 10:21:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:23.545 10:21:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.545 10:21:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.530 10:21:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.530 00:06:24.530 real 0m3.102s 00:06:24.530 user 0m0.025s 00:06:24.530 sys 0m0.006s 00:06:24.530 10:21:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.530 10:21:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.530 ************************************ 00:06:24.530 END TEST scheduler_create_thread 00:06:24.530 ************************************ 00:06:24.789 10:21:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.789 10:21:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3711479 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3711479 ']' 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3711479 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711479 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711479' 00:06:24.789 killing process with pid 3711479 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3711479 00:06:24.789 10:21:28 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3711479 00:06:25.048 [2024-07-25 10:21:28.596637] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
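The scheduler_create_thread subtest above drives the app through an out-of-tree RPC plugin: rpc.py --plugin scheduler_plugin adds the scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete methods used in the trace; -m appears to be a cpumask and -a a thread's active percentage. A condensed sketch of those calls follows; the PYTHONPATH export is an assumption about how the plugin module is made importable and is not shown in the trace.

    # Sketch: thread-management RPCs provided by the test's scheduler_plugin
    export PYTHONPATH=./test/event/scheduler           # assumed location of scheduler_plugin.py
    rpc="./scripts/rpc.py --plugin scheduler_plugin"

    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned, fully busy
    $rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # pinned, idle
    $rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, partially busy

    tid=$($rpc scheduler_thread_create -n half_active -a 0)       # create returns the thread id
    $rpc scheduler_thread_set_active "$tid" 50                    # raise it to 50% activity

    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"                           # ids 11 and 12 in the trace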
00:06:25.307 00:06:25.307 real 0m4.770s 00:06:25.307 user 0m9.225s 00:06:25.307 sys 0m0.433s 00:06:25.307 10:21:28 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.307 10:21:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.307 ************************************ 00:06:25.307 END TEST event_scheduler 00:06:25.307 ************************************ 00:06:25.307 10:21:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:25.307 10:21:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:25.307 10:21:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.307 10:21:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.307 10:21:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.307 ************************************ 00:06:25.307 START TEST app_repeat 00:06:25.307 ************************************ 00:06:25.307 10:21:28 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3712382 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3712382' 00:06:25.307 Process app_repeat pid: 3712382 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:25.307 spdk_app_start Round 0 00:06:25.307 10:21:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3712382 /var/tmp/spdk-nbd.sock 00:06:25.307 10:21:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3712382 ']' 00:06:25.307 10:21:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.307 10:21:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.307 10:21:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.307 10:21:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.307 10:21:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.307 [2024-07-25 10:21:28.937595] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
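Here app_repeat is launched with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 and the harness then blocks in waitforlisten until the UNIX-domain RPC socket is usable. The real helper lives in autotest_common.sh, also takes the pid, and is not shown in full in this trace; the loop below is only a simplified stand-in that polls for the socket file with the same 100-retry budget the trace reports, with paths relative to an SPDK checkout.

  # Simplified stand-in for waitforlisten; the real implementation may poll differently.
  wait_for_rpc_sock() {
      local sock=$1 max_retries=${2:-100} i
      for ((i = 0; i < max_retries; i++)); do
          [[ -S $sock ]] && return 0   # socket file exists, app is accepting RPCs
          sleep 0.1
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }

  # Launch app_repeat the way the trace does.
  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  wait_for_rpc_sock /var/tmp/spdk-nbd.sock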
00:06:25.307 [2024-07-25 10:21:28.937655] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712382 ] 00:06:25.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.307 [2024-07-25 10:21:29.009352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.567 [2024-07-25 10:21:29.084669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.567 [2024-07-25 10:21:29.084672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.135 10:21:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.135 10:21:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:26.135 10:21:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.394 Malloc0 00:06:26.394 10:21:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.653 Malloc1 00:06:26.653 10:21:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.653 /dev/nbd0 00:06:26.653 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.654 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.654 10:21:30 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.654 1+0 records in 00:06:26.654 1+0 records out 00:06:26.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257141 s, 15.9 MB/s 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.654 10:21:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.654 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.654 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.654 10:21:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.913 /dev/nbd1 00:06:26.913 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.913 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.913 1+0 records in 00:06:26.913 1+0 records out 00:06:26.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234202 s, 17.5 MB/s 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.913 10:21:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.913 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.913 10:21:30 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.913 10:21:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.913 10:21:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.913 10:21:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.172 { 00:06:27.172 "nbd_device": "/dev/nbd0", 00:06:27.172 "bdev_name": "Malloc0" 00:06:27.172 }, 00:06:27.172 { 00:06:27.172 "nbd_device": "/dev/nbd1", 00:06:27.172 "bdev_name": "Malloc1" 00:06:27.172 } 00:06:27.172 ]' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.172 { 00:06:27.172 "nbd_device": "/dev/nbd0", 00:06:27.172 "bdev_name": "Malloc0" 00:06:27.172 }, 00:06:27.172 { 00:06:27.172 "nbd_device": "/dev/nbd1", 00:06:27.172 "bdev_name": "Malloc1" 00:06:27.172 } 00:06:27.172 ]' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.172 /dev/nbd1' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.172 /dev/nbd1' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.172 256+0 records in 00:06:27.172 256+0 records out 00:06:27.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113999 s, 92.0 MB/s 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.172 256+0 records in 00:06:27.172 256+0 records out 00:06:27.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133795 s, 78.4 MB/s 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.172 256+0 records in 00:06:27.172 256+0 records out 00:06:27.172 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0212329 s, 49.4 MB/s 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.172 10:21:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.431 10:21:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.431 10:21:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.431 10:21:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.431 10:21:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.431 10:21:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.431 10:21:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.431 10:21:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.689 10:21:31 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.689 10:21:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.948 10:21:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.948 10:21:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.207 10:21:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.207 [2024-07-25 10:21:31.846144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.207 [2024-07-25 10:21:31.909436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.207 [2024-07-25 10:21:31.909439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.466 [2024-07-25 10:21:31.949954] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.466 [2024-07-25 10:21:31.949996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.002 10:21:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.002 10:21:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.002 spdk_app_start Round 1 00:06:31.002 10:21:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3712382 /var/tmp/spdk-nbd.sock 00:06:31.002 10:21:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3712382 ']' 00:06:31.002 10:21:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.002 10:21:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.002 10:21:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
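Round 0 above runs the full nbd_rpc_data_verify flow: create two 64 MB malloc bdevs with 4096-byte blocks, expose them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device, read it back with cmp, then detach and confirm the device count drops to zero before spdk_kill_instance restarts the app for the next round. A condensed sketch of one such round, using the same RPC calls and dd/cmp flags that appear in the trace (run from an SPDK checkout with the app listening on the -s socket):

  # One round of the data-verify flow, condensed.
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  tmp=$(mktemp)

  # Two 64 MB malloc bdevs with 4096-byte blocks, exposed as NBD devices.
  rpc bdev_malloc_create 64 4096            # prints the generated name, Malloc0
  rpc bdev_malloc_create 64 4096            # Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0
  rpc nbd_start_disk Malloc1 /dev/nbd1

  # Push 1 MiB of random data through each device and read it back for comparison.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$tmp" "$nbd"            # any difference fails the round
  done
  rm "$tmp"

  # Detach the NBD devices again.
  rpc nbd_stop_disk /dev/nbd0
  rpc nbd_stop_disk /dev/nbd1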
00:06:31.002 10:21:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.002 10:21:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.262 10:21:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.262 10:21:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:31.262 10:21:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.521 Malloc0 00:06:31.521 10:21:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.521 Malloc1 00:06:31.521 10:21:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.521 10:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.522 10:21:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.781 /dev/nbd0 00:06:31.781 10:21:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.781 10:21:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:31.781 1+0 records in 00:06:31.781 1+0 records out 00:06:31.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023667 s, 17.3 MB/s 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.781 10:21:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.781 10:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.781 10:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.781 10:21:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.041 /dev/nbd1 00:06:32.041 10:21:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.041 10:21:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.041 1+0 records in 00:06:32.041 1+0 records out 00:06:32.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232584 s, 17.6 MB/s 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.041 10:21:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:32.041 10:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.041 10:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.041 10:21:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.041 10:21:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.041 10:21:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:32.300 { 00:06:32.300 "nbd_device": "/dev/nbd0", 00:06:32.300 "bdev_name": "Malloc0" 00:06:32.300 }, 00:06:32.300 { 00:06:32.300 "nbd_device": "/dev/nbd1", 00:06:32.300 "bdev_name": "Malloc1" 00:06:32.300 } 00:06:32.300 ]' 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.300 { 00:06:32.300 "nbd_device": "/dev/nbd0", 00:06:32.300 "bdev_name": "Malloc0" 00:06:32.300 }, 00:06:32.300 { 00:06:32.300 "nbd_device": "/dev/nbd1", 00:06:32.300 "bdev_name": "Malloc1" 00:06:32.300 } 00:06:32.300 ]' 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.300 /dev/nbd1' 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.300 /dev/nbd1' 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.300 256+0 records in 00:06:32.300 256+0 records out 00:06:32.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010967 s, 95.6 MB/s 00:06:32.300 10:21:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.301 256+0 records in 00:06:32.301 256+0 records out 00:06:32.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196467 s, 53.4 MB/s 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.301 256+0 records in 00:06:32.301 256+0 records out 00:06:32.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147739 s, 71.0 MB/s 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.301 10:21:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.560 10:21:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.820 10:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.080 10:21:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.080 10:21:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.080 10:21:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.339 [2024-07-25 10:21:36.910268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.339 [2024-07-25 10:21:36.972392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.339 [2024-07-25 10:21:36.972395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.339 [2024-07-25 10:21:37.013879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.339 [2024-07-25 10:21:37.013921] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.629 10:21:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.629 10:21:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.629 spdk_app_start Round 2 00:06:36.629 10:21:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3712382 /var/tmp/spdk-nbd.sock 00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3712382 ']' 00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
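Before and after the I/O, the harness double-checks how many NBD devices are actually attached. The trace shows nbd_get_count doing this by asking nbd_get_disks for a JSON array, extracting .nbd_device with jq, and counting matches with grep -c; the helper below reproduces that pipeline (the name nbd_count is ours, the underlying commands are the ones in the trace).

  # Count attached NBD devices the same way the trace derives it.
  nbd_count() {
      local sock=$1
      ./scripts/rpc.py -s "$sock" nbd_get_disks \
          | jq -r '.[] | .nbd_device' \
          | grep -c /dev/nbd
  }

  count=$(nbd_count /var/tmp/spdk-nbd.sock)
  if [ "$count" -ne 2 ]; then
      echo "expected 2 NBD devices, found $count" >&2
  fi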
00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.629 10:21:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:36.629 10:21:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.629 Malloc0 00:06:36.629 10:21:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.629 Malloc1 00:06:36.629 10:21:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.629 10:21:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.888 /dev/nbd0 00:06:36.888 10:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.888 10:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.888 10:21:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:36.888 10:21:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:36.888 10:21:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.888 10:21:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:36.889 1+0 records in 00:06:36.889 1+0 records out 00:06:36.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252152 s, 16.2 MB/s 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.889 10:21:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:36.889 10:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.889 10:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.889 10:21:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.148 /dev/nbd1 00:06:37.148 10:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.148 10:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.148 1+0 records in 00:06:37.148 1+0 records out 00:06:37.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227411 s, 18.0 MB/s 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.148 10:21:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:37.148 10:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.148 10:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.148 10:21:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.148 10:21:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.148 10:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:37.408 { 00:06:37.408 "nbd_device": "/dev/nbd0", 00:06:37.408 "bdev_name": "Malloc0" 00:06:37.408 }, 00:06:37.408 { 00:06:37.408 "nbd_device": "/dev/nbd1", 00:06:37.408 "bdev_name": "Malloc1" 00:06:37.408 } 00:06:37.408 ]' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.408 { 00:06:37.408 "nbd_device": "/dev/nbd0", 00:06:37.408 "bdev_name": "Malloc0" 00:06:37.408 }, 00:06:37.408 { 00:06:37.408 "nbd_device": "/dev/nbd1", 00:06:37.408 "bdev_name": "Malloc1" 00:06:37.408 } 00:06:37.408 ]' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.408 /dev/nbd1' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.408 /dev/nbd1' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.408 256+0 records in 00:06:37.408 256+0 records out 00:06:37.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106813 s, 98.2 MB/s 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.408 256+0 records in 00:06:37.408 256+0 records out 00:06:37.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185878 s, 56.4 MB/s 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.408 256+0 records in 00:06:37.408 256+0 records out 00:06:37.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208645 s, 50.3 MB/s 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.408 10:21:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.408 10:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.668 10:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.927 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.187 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.187 10:21:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.187 10:21:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.187 10:21:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.187 10:21:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.187 10:21:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.187 10:21:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.447 [2024-07-25 10:21:42.001258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.447 [2024-07-25 10:21:42.064019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.447 [2024-07-25 10:21:42.064022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.447 [2024-07-25 10:21:42.104475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.447 [2024-07-25 10:21:42.104515] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.780 10:21:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3712382 /var/tmp/spdk-nbd.sock 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3712382 ']' 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
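Each nbd_start_disk and nbd_stop_disk in the rounds above is bracketed by a waitfornbd or waitfornbd_exit poll: up to 20 retries checking /proc/partitions for the device name, and for attach additionally a single 4 KiB direct read to prove the device serves I/O. The functions below paraphrase that pattern from the trace rather than copy autotest_common.sh; the retry interval is our choice.

  # Wait until the kernel has registered the NBD device and it is readable.
  waitfornbd_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      ((i <= 20)) || return 1
      # One 4 KiB direct read proves I/O works end to end.
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
  }

  # Wait until the device has disappeared again after nbd_stop_disk.
  waitfornbd_exit_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1
  }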
00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:41.780 10:21:44 event.app_repeat -- event/event.sh@39 -- # killprocess 3712382 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3712382 ']' 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3712382 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.780 10:21:44 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3712382 00:06:41.780 10:21:45 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.780 10:21:45 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.780 10:21:45 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3712382' 00:06:41.780 killing process with pid 3712382 00:06:41.780 10:21:45 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3712382 00:06:41.780 10:21:45 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3712382 00:06:41.780 spdk_app_start is called in Round 0. 00:06:41.780 Shutdown signal received, stop current app iteration 00:06:41.780 Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 reinitialization... 00:06:41.780 spdk_app_start is called in Round 1. 00:06:41.780 Shutdown signal received, stop current app iteration 00:06:41.780 Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 reinitialization... 00:06:41.780 spdk_app_start is called in Round 2. 00:06:41.780 Shutdown signal received, stop current app iteration 00:06:41.780 Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 reinitialization... 00:06:41.780 spdk_app_start is called in Round 3. 
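Teardown in both the scheduler and app_repeat tests goes through killprocess, which the trace shows checking that the pid is non-empty and still alive (kill -0), resolving the process name with ps so it never signals a sudo wrapper directly, and then killing and waiting on the pid. A hedged reconstruction of that logic, Linux path only:

  # Reconstructed from the killprocess lines in the trace; the real helper in
  # autotest_common.sh also covers non-Linux hosts, which this sketch skips.
  killprocess_sketch() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                      # pid must still be alive
      [ "$(uname)" = Linux ] || return 1              # non-Linux handling omitted here
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1          # never SIGTERM a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                     # pid must be a child of this shell
  }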
00:06:41.780 Shutdown signal received, stop current app iteration 00:06:41.780 10:21:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:41.780 10:21:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:41.780 00:06:41.780 real 0m16.308s 00:06:41.780 user 0m34.682s 00:06:41.780 sys 0m3.030s 00:06:41.780 10:21:45 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.780 10:21:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.780 ************************************ 00:06:41.780 END TEST app_repeat 00:06:41.780 ************************************ 00:06:41.780 10:21:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:41.780 10:21:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.780 10:21:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.780 10:21:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.780 10:21:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.780 ************************************ 00:06:41.780 START TEST cpu_locks 00:06:41.780 ************************************ 00:06:41.780 10:21:45 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.780 * Looking for test storage... 00:06:41.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.780 10:21:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.780 10:21:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.780 10:21:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.780 10:21:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.780 10:21:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.780 10:21:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.780 10:21:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.780 ************************************ 00:06:41.780 START TEST default_locks 00:06:41.780 ************************************ 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3715328 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3715328 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3715328 ']' 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.780 10:21:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.780 10:21:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.039 [2024-07-25 10:21:45.493922] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:42.039 [2024-07-25 10:21:45.493966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715328 ] 00:06:42.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.039 [2024-07-25 10:21:45.561811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.040 [2024-07-25 10:21:45.631187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.607 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.607 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:42.607 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3715328 00:06:42.607 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3715328 00:06:42.607 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.174 lslocks: write error 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3715328 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3715328 ']' 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3715328 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3715328 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3715328' 00:06:43.174 killing process with pid 3715328 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3715328 00:06:43.174 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3715328 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3715328 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3715328 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 3715328 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3715328 ']' 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.433 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3715328) - No such process 00:06:43.434 ERROR: process (pid: 3715328) is no longer running 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.434 00:06:43.434 real 0m1.503s 00:06:43.434 user 0m1.542s 00:06:43.434 sys 0m0.503s 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.434 10:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.434 ************************************ 00:06:43.434 END TEST default_locks 00:06:43.434 ************************************ 00:06:43.434 10:21:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:43.434 10:21:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.434 10:21:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.434 10:21:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.434 ************************************ 00:06:43.434 START TEST default_locks_via_rpc 00:06:43.434 ************************************ 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3715620 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3715620 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3715620 ']' 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.434 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.434 [2024-07-25 10:21:47.074732] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:43.434 [2024-07-25 10:21:47.074775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715620 ] 00:06:43.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.692 [2024-07-25 10:21:47.143785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.692 [2024-07-25 10:21:47.209416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.259 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.260 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.260 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.260 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.260 10:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.260 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3715620 00:06:44.260 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3715620 00:06:44.260 10:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.827 10:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
3715620 00:06:44.827 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3715620 ']' 00:06:44.827 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3715620 00:06:44.827 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:44.828 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.828 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3715620 00:06:44.828 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.828 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.828 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3715620' 00:06:44.828 killing process with pid 3715620 00:06:44.828 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3715620 00:06:44.828 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3715620 00:06:45.087 00:06:45.087 real 0m1.573s 00:06:45.087 user 0m1.643s 00:06:45.087 sys 0m0.546s 00:06:45.087 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.087 10:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.087 ************************************ 00:06:45.087 END TEST default_locks_via_rpc 00:06:45.087 ************************************ 00:06:45.087 10:21:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.087 10:21:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.087 10:21:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.087 10:21:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.087 ************************************ 00:06:45.087 START TEST non_locking_app_on_locked_coremask 00:06:45.087 ************************************ 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3715930 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3715930 /var/tmp/spdk.sock 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3715930 ']' 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:45.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.087 10:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.087 [2024-07-25 10:21:48.726532] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:45.087 [2024-07-25 10:21:48.726586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715930 ] 00:06:45.087 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.346 [2024-07-25 10:21:48.796436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.346 [2024-07-25 10:21:48.869774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3716178 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3716178 /var/tmp/spdk2.sock 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3716178 ']' 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.913 10:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.913 [2024-07-25 10:21:49.549565] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:45.913 [2024-07-25 10:21:49.549618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716178 ] 00:06:45.913 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.172 [2024-07-25 10:21:49.645364] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.172 [2024-07-25 10:21:49.645389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.172 [2024-07-25 10:21:49.793843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.738 10:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.738 10:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.739 10:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3715930 00:06:46.739 10:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3715930 00:06:46.739 10:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.677 lslocks: write error 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3715930 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3715930 ']' 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3715930 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3715930 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3715930' 00:06:47.677 killing process with pid 3715930 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3715930 00:06:47.677 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3715930 00:06:48.265 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3716178 00:06:48.265 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3716178 ']' 00:06:48.265 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3716178 00:06:48.265 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.265 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.265 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3716178 00:06:48.525 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.525 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.525 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3716178' 00:06:48.525 
killing process with pid 3716178 00:06:48.525 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3716178 00:06:48.525 10:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3716178 00:06:48.784 00:06:48.784 real 0m3.615s 00:06:48.784 user 0m3.875s 00:06:48.784 sys 0m1.125s 00:06:48.784 10:21:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.784 10:21:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.784 ************************************ 00:06:48.784 END TEST non_locking_app_on_locked_coremask 00:06:48.784 ************************************ 00:06:48.784 10:21:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.784 10:21:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.784 10:21:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.784 10:21:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.784 ************************************ 00:06:48.784 START TEST locking_app_on_unlocked_coremask 00:06:48.784 ************************************ 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3716741 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3716741 /var/tmp/spdk.sock 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3716741 ']' 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.784 10:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.784 [2024-07-25 10:21:52.419168] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:48.784 [2024-07-25 10:21:52.419211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716741 ] 00:06:48.784 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.784 [2024-07-25 10:21:52.487206] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.784 [2024-07-25 10:21:52.487238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.044 [2024-07-25 10:21:52.560011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.610 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.610 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3716752 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3716752 /var/tmp/spdk2.sock 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3716752 ']' 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.611 10:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.611 [2024-07-25 10:21:53.257554] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:49.611 [2024-07-25 10:21:53.257602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716752 ] 00:06:49.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.879 [2024-07-25 10:21:53.355830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.879 [2024-07-25 10:21:53.492304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.454 10:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.454 10:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.454 10:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3716752 00:06:50.454 10:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3716752 00:06:50.454 10:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.833 lslocks: write error 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3716741 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3716741 ']' 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3716741 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3716741 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3716741' 00:06:51.833 killing process with pid 3716741 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3716741 00:06:51.833 10:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3716741 00:06:52.401 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3716752 00:06:52.401 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3716752 ']' 00:06:52.401 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3716752 00:06:52.401 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.401 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.401 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3716752 00:06:52.402 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:52.402 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.402 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3716752' 00:06:52.402 killing process with pid 3716752 00:06:52.402 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3716752 00:06:52.402 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3716752 00:06:52.971 00:06:52.971 real 0m4.010s 00:06:52.971 user 0m4.288s 00:06:52.971 sys 0m1.323s 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.971 ************************************ 00:06:52.971 END TEST locking_app_on_unlocked_coremask 00:06:52.971 ************************************ 00:06:52.971 10:21:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:52.971 10:21:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.971 10:21:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.971 10:21:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.971 ************************************ 00:06:52.971 START TEST locking_app_on_locked_coremask 00:06:52.971 ************************************ 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3717337 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3717337 /var/tmp/spdk.sock 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3717337 ']' 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.971 10:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.971 [2024-07-25 10:21:56.506147] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:52.971 [2024-07-25 10:21:56.506191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717337 ] 00:06:52.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.971 [2024-07-25 10:21:56.575015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.971 [2024-07-25 10:21:56.643347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3717575 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3717575 /var/tmp/spdk2.sock 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3717575 /var/tmp/spdk2.sock 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3717575 /var/tmp/spdk2.sock 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3717575 ']' 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.909 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.909 [2024-07-25 10:21:57.343928] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:53.909 [2024-07-25 10:21:57.343977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717575 ] 00:06:53.909 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.909 [2024-07-25 10:21:57.439167] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3717337 has claimed it. 00:06:53.909 [2024-07-25 10:21:57.439211] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3717575) - No such process 00:06:54.478 ERROR: process (pid: 3717575) is no longer running 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3717337 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3717337 00:06:54.478 10:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.737 lslocks: write error 00:06:54.737 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3717337 00:06:54.737 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3717337 ']' 00:06:54.737 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3717337 00:06:54.737 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.737 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.737 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3717337 00:06:55.026 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.026 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.026 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3717337' 00:06:55.026 killing process with pid 3717337 00:06:55.026 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3717337 00:06:55.026 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3717337 00:06:55.286 00:06:55.286 real 0m2.306s 00:06:55.286 user 0m2.506s 00:06:55.286 sys 0m0.712s 00:06:55.286 10:21:58 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.286 10:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.286 ************************************ 00:06:55.286 END TEST locking_app_on_locked_coremask 00:06:55.287 ************************************ 00:06:55.287 10:21:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:55.287 10:21:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.287 10:21:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.287 10:21:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.287 ************************************ 00:06:55.287 START TEST locking_overlapped_coremask 00:06:55.287 ************************************ 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3717875 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3717875 /var/tmp/spdk.sock 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3717875 ']' 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.287 10:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:55.287 [2024-07-25 10:21:58.884858] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:55.287 [2024-07-25 10:21:58.884905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717875 ] 00:06:55.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.287 [2024-07-25 10:21:58.953185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.547 [2024-07-25 10:21:59.028602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.547 [2024-07-25 10:21:59.028699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.547 [2024-07-25 10:21:59.028702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3717977 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3717977 /var/tmp/spdk2.sock 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3717977 /var/tmp/spdk2.sock 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3717977 /var/tmp/spdk2.sock 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3717977 ']' 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.116 10:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.116 [2024-07-25 10:21:59.742998] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:06:56.116 [2024-07-25 10:21:59.743049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717977 ] 00:06:56.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.374 [2024-07-25 10:21:59.844114] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3717875 has claimed it. 00:06:56.374 [2024-07-25 10:21:59.844158] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3717977) - No such process 00:06:56.942 ERROR: process (pid: 3717977) is no longer running 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.942 10:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3717875 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3717875 ']' 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3717875 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3717875 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3717875' 00:06:56.943 killing process with pid 3717875 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 3717875 00:06:56.943 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3717875 00:06:57.202 00:06:57.202 real 0m1.903s 00:06:57.202 user 0m5.319s 00:06:57.202 sys 0m0.479s 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.202 ************************************ 00:06:57.202 END TEST locking_overlapped_coremask 00:06:57.202 ************************************ 00:06:57.202 10:22:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:57.202 10:22:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.202 10:22:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.202 10:22:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.202 ************************************ 00:06:57.202 START TEST locking_overlapped_coremask_via_rpc 00:06:57.202 ************************************ 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3718183 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3718183 /var/tmp/spdk.sock 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3718183 ']' 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.202 10:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.202 [2024-07-25 10:22:00.867815] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:57.202 [2024-07-25 10:22:00.867859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718183 ] 00:06:57.202 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.461 [2024-07-25 10:22:00.936659] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.461 [2024-07-25 10:22:00.936689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.461 [2024-07-25 10:22:01.002187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.461 [2024-07-25 10:22:01.002284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.461 [2024-07-25 10:22:01.002286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3718447 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3718447 /var/tmp/spdk2.sock 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3718447 ']' 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.030 10:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.030 [2024-07-25 10:22:01.707414] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:06:58.030 [2024-07-25 10:22:01.707466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718447 ] 00:06:58.289 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.289 [2024-07-25 10:22:01.804482] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.289 [2024-07-25 10:22:01.804517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.289 [2024-07-25 10:22:01.947905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.289 [2024-07-25 10:22:01.948024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.289 [2024-07-25 10:22:01.948025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.858 [2024-07-25 10:22:02.526794] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3718183 has claimed it. 
00:06:58.858 request: 00:06:58.858 { 00:06:58.858 "method": "framework_enable_cpumask_locks", 00:06:58.858 "req_id": 1 00:06:58.858 } 00:06:58.858 Got JSON-RPC error response 00:06:58.858 response: 00:06:58.858 { 00:06:58.858 "code": -32603, 00:06:58.858 "message": "Failed to claim CPU core: 2" 00:06:58.858 } 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3718183 /var/tmp/spdk.sock 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3718183 ']' 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.858 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.117 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.117 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:59.117 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3718447 /var/tmp/spdk2.sock 00:06:59.117 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3718447 ']' 00:06:59.117 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.118 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.118 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
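The -32603 response above is the expected result for this test: both targets were started with --disable-cpumask-locks, the first target (mask 0x7, pid 3718183) then claimed its cores over RPC, and the second target (mask 0x1c) fails when it asks for core 2, which the two masks share. A minimal sketch of the same check, assuming the scripts/rpc.py client and the socket and lock-file paths shown in this run (workspace prefix dropped):

    # Locks are taken via RPC because both targets started with --disable-cpumask-locks.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed
    ls /var/tmp/spdk_cpu_lock_*                                              # one lock file per claimed core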
00:06:59.118 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.118 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.377 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.377 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:59.377 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:59.377 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.377 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.377 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.377 00:06:59.377 real 0m2.102s 00:06:59.377 user 0m0.826s 00:06:59.377 sys 0m0.206s 00:06:59.378 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.378 10:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.378 ************************************ 00:06:59.378 END TEST locking_overlapped_coremask_via_rpc 00:06:59.378 ************************************ 00:06:59.378 10:22:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:59.378 10:22:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3718183 ]] 00:06:59.378 10:22:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3718183 00:06:59.378 10:22:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3718183 ']' 00:06:59.378 10:22:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3718183 00:06:59.378 10:22:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:59.378 10:22:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.378 10:22:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3718183 00:06:59.378 10:22:03 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.378 10:22:03 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.378 10:22:03 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3718183' 00:06:59.378 killing process with pid 3718183 00:06:59.378 10:22:03 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3718183 00:06:59.378 10:22:03 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3718183 00:06:59.637 10:22:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3718447 ]] 00:06:59.637 10:22:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3718447 00:06:59.637 10:22:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3718447 ']' 00:06:59.637 10:22:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3718447 00:06:59.637 10:22:03 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:59.637 10:22:03 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:59.637 10:22:03 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3718447 00:06:59.896 10:22:03 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:59.896 10:22:03 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:59.896 10:22:03 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3718447' 00:06:59.896 killing process with pid 3718447 00:06:59.896 10:22:03 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3718447 00:06:59.896 10:22:03 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3718447 00:07:00.155 10:22:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.155 10:22:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.155 10:22:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3718183 ]] 00:07:00.155 10:22:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3718183 00:07:00.155 10:22:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3718183 ']' 00:07:00.155 10:22:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3718183 00:07:00.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3718183) - No such process 00:07:00.156 10:22:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3718183 is not found' 00:07:00.156 Process with pid 3718183 is not found 00:07:00.156 10:22:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3718447 ]] 00:07:00.156 10:22:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3718447 00:07:00.156 10:22:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3718447 ']' 00:07:00.156 10:22:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3718447 00:07:00.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3718447) - No such process 00:07:00.156 10:22:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3718447 is not found' 00:07:00.156 Process with pid 3718447 is not found 00:07:00.156 10:22:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.156 00:07:00.156 real 0m18.418s 00:07:00.156 user 0m30.617s 00:07:00.156 sys 0m5.933s 00:07:00.156 10:22:03 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.156 10:22:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.156 ************************************ 00:07:00.156 END TEST cpu_locks 00:07:00.156 ************************************ 00:07:00.156 00:07:00.156 real 0m43.806s 00:07:00.156 user 1m21.157s 00:07:00.156 sys 0m10.111s 00:07:00.156 10:22:03 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.156 10:22:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.156 ************************************ 00:07:00.156 END TEST event 00:07:00.156 ************************************ 00:07:00.156 10:22:03 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:00.156 10:22:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.156 10:22:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.156 10:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:00.156 ************************************ 00:07:00.156 START TEST thread 00:07:00.156 ************************************ 00:07:00.156 10:22:03 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:00.415 * Looking for test storage... 00:07:00.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:00.415 10:22:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.415 10:22:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:00.415 10:22:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.415 10:22:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.415 ************************************ 00:07:00.415 START TEST thread_poller_perf 00:07:00.415 ************************************ 00:07:00.415 10:22:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.415 [2024-07-25 10:22:04.003731] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:00.415 [2024-07-25 10:22:04.003798] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718818 ] 00:07:00.415 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.415 [2024-07-25 10:22:04.074557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.674 [2024-07-25 10:22:04.143754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.675 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:01.612 ====================================== 00:07:01.612 busy:2506574434 (cyc) 00:07:01.612 total_run_count: 432000 00:07:01.612 tsc_hz: 2500000000 (cyc) 00:07:01.612 ====================================== 00:07:01.612 poller_cost: 5802 (cyc), 2320 (nsec) 00:07:01.612 00:07:01.612 real 0m1.231s 00:07:01.612 user 0m1.141s 00:07:01.612 sys 0m0.087s 00:07:01.612 10:22:05 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.612 10:22:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.612 ************************************ 00:07:01.612 END TEST thread_poller_perf 00:07:01.612 ************************************ 00:07:01.612 10:22:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.612 10:22:05 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:01.612 10:22:05 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.612 10:22:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.612 ************************************ 00:07:01.612 START TEST thread_poller_perf 00:07:01.612 ************************************ 00:07:01.612 10:22:05 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.612 [2024-07-25 10:22:05.315195] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
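The poller_cost reported for the 1 µs-period run above follows directly from the counters poller_perf prints: busy cycles divided by total_run_count gives the cost of one poller iteration, and dividing by the TSC rate converts cycles to nanoseconds. A quick recomputation from the values in this run:

    # Recompute poller_cost from the printed counters (values taken from the log above)
    echo $(( 2506574434 / 432000 ))             # ~5802 cycles per iteration
    awk 'BEGIN { printf "%d\n", 5802 / 2.5 }'   # ~2320 ns at tsc_hz = 2.5 GHz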
00:07:01.612 [2024-07-25 10:22:05.315276] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719099 ] 00:07:01.871 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.871 [2024-07-25 10:22:05.386309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.871 [2024-07-25 10:22:05.453869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.871 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:03.251 ====================================== 00:07:03.251 busy:2501620284 (cyc) 00:07:03.251 total_run_count: 5647000 00:07:03.251 tsc_hz: 2500000000 (cyc) 00:07:03.251 ====================================== 00:07:03.251 poller_cost: 442 (cyc), 176 (nsec) 00:07:03.251 00:07:03.251 real 0m1.226s 00:07:03.251 user 0m1.132s 00:07:03.251 sys 0m0.090s 00:07:03.251 10:22:06 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.251 10:22:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.251 ************************************ 00:07:03.251 END TEST thread_poller_perf 00:07:03.251 ************************************ 00:07:03.251 10:22:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.251 00:07:03.251 real 0m2.727s 00:07:03.251 user 0m2.377s 00:07:03.251 sys 0m0.361s 00:07:03.251 10:22:06 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.251 10:22:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.251 ************************************ 00:07:03.251 END TEST thread 00:07:03.251 ************************************ 00:07:03.251 10:22:06 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:03.251 10:22:06 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.251 10:22:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.251 10:22:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.251 10:22:06 -- common/autotest_common.sh@10 -- # set +x 00:07:03.251 ************************************ 00:07:03.251 START TEST app_cmdline 00:07:03.251 ************************************ 00:07:03.251 10:22:06 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.251 * Looking for test storage... 00:07:03.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.251 10:22:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:03.251 10:22:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3719426 00:07:03.251 10:22:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3719426 00:07:03.251 10:22:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:03.251 10:22:06 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3719426 ']' 00:07:03.251 10:22:06 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.251 10:22:06 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.251 10:22:06 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:03.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.251 10:22:06 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.251 10:22:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.251 [2024-07-25 10:22:06.796600] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:03.251 [2024-07-25 10:22:06.796648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719426 ] 00:07:03.251 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.251 [2024-07-25 10:22:06.865514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.251 [2024-07-25 10:22:06.934566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:04.189 { 00:07:04.189 "version": "SPDK v24.09-pre git sha1 6f18624d4", 00:07:04.189 "fields": { 00:07:04.189 "major": 24, 00:07:04.189 "minor": 9, 00:07:04.189 "patch": 0, 00:07:04.189 "suffix": "-pre", 00:07:04.189 "commit": "6f18624d4" 00:07:04.189 } 00:07:04.189 } 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.189 10:22:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.189 10:22:07 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.449 request: 00:07:04.449 { 00:07:04.449 "method": "env_dpdk_get_mem_stats", 00:07:04.449 "req_id": 1 00:07:04.449 } 00:07:04.449 Got JSON-RPC error response 00:07:04.449 response: 00:07:04.449 { 00:07:04.449 "code": -32601, 00:07:04.449 "message": "Method not found" 00:07:04.449 } 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.449 10:22:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3719426 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3719426 ']' 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3719426 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3719426 00:07:04.449 10:22:07 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.449 10:22:08 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.449 10:22:08 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3719426' 00:07:04.449 killing process with pid 3719426 00:07:04.449 10:22:08 app_cmdline -- common/autotest_common.sh@969 -- # kill 3719426 00:07:04.449 10:22:08 app_cmdline -- common/autotest_common.sh@974 -- # wait 3719426 00:07:04.709 00:07:04.709 real 0m1.663s 00:07:04.709 user 0m1.926s 00:07:04.709 sys 0m0.479s 00:07:04.709 10:22:08 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.709 10:22:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.709 ************************************ 00:07:04.709 END TEST app_cmdline 00:07:04.709 ************************************ 00:07:04.709 10:22:08 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:04.709 10:22:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.709 10:22:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.709 10:22:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.709 ************************************ 00:07:04.709 START TEST version 00:07:04.709 ************************************ 00:07:04.709 10:22:08 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:04.968 * Looking for test storage... 
00:07:04.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:04.968 10:22:08 version -- app/version.sh@17 -- # get_header_version major 00:07:04.968 10:22:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # cut -f2 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.968 10:22:08 version -- app/version.sh@17 -- # major=24 00:07:04.968 10:22:08 version -- app/version.sh@18 -- # get_header_version minor 00:07:04.968 10:22:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # cut -f2 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.968 10:22:08 version -- app/version.sh@18 -- # minor=9 00:07:04.968 10:22:08 version -- app/version.sh@19 -- # get_header_version patch 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # cut -f2 00:07:04.968 10:22:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.968 10:22:08 version -- app/version.sh@19 -- # patch=0 00:07:04.968 10:22:08 version -- app/version.sh@20 -- # get_header_version suffix 00:07:04.968 10:22:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # cut -f2 00:07:04.968 10:22:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.968 10:22:08 version -- app/version.sh@20 -- # suffix=-pre 00:07:04.968 10:22:08 version -- app/version.sh@22 -- # version=24.9 00:07:04.968 10:22:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:04.968 10:22:08 version -- app/version.sh@28 -- # version=24.9rc0 00:07:04.968 10:22:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:04.968 10:22:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:04.968 10:22:08 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:04.968 10:22:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:04.968 00:07:04.968 real 0m0.182s 00:07:04.968 user 0m0.088s 00:07:04.968 sys 0m0.133s 00:07:04.968 10:22:08 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.968 10:22:08 version -- common/autotest_common.sh@10 -- # set +x 00:07:04.968 ************************************ 00:07:04.968 END TEST version 00:07:04.968 ************************************ 00:07:04.968 10:22:08 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:04.968 10:22:08 -- spdk/autotest.sh@202 -- # uname -s 00:07:04.968 10:22:08 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:04.968 10:22:08 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:04.968 10:22:08 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:04.968 10:22:08 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
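version.sh builds each component by grepping the matching #define out of include/spdk/version.h and then checks that the assembled string agrees with what the Python bindings report. A condensed sketch of that extraction, reusing the exact pipeline seen above:

    # Pull one version component out of the header, as app/version.sh does
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    # Cross-check against the installed Python package (24.9rc0 for this tree)
    python3 -c 'import spdk; print(spdk.__version__)'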
00:07:04.968 10:22:08 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:04.968 10:22:08 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:04.968 10:22:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:04.968 10:22:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.968 10:22:08 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:04.968 10:22:08 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:04.968 10:22:08 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:04.968 10:22:08 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:04.968 10:22:08 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:04.968 10:22:08 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:04.968 10:22:08 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:04.968 10:22:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:04.968 10:22:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.968 10:22:08 -- common/autotest_common.sh@10 -- # set +x 00:07:05.228 ************************************ 00:07:05.228 START TEST nvmf_tcp 00:07:05.228 ************************************ 00:07:05.228 10:22:08 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.228 * Looking for test storage... 00:07:05.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.228 10:22:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.228 10:22:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:05.228 10:22:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:05.228 10:22:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.228 10:22:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.228 10:22:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.228 ************************************ 00:07:05.228 START TEST nvmf_target_core 00:07:05.228 ************************************ 00:07:05.228 10:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:05.228 * Looking for test storage... 00:07:05.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.228 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.488 10:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.488 ************************************ 00:07:05.488 START TEST nvmf_abort 00:07:05.488 ************************************ 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:05.488 * Looking for test storage... 
00:07:05.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.488 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
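Sourcing nvmf/common.sh also fixes the initiator-side defaults the later steps rely on: port 4420, the host NQN generated by nvme gen-hostnqn, and the nqn.2016-06.io.spdk:testnqn subsystem name. Purely as an illustration (not a command taken from this log), those variables would combine into an nvme-cli connect call roughly like:

    # Hypothetical connect built from NVMF_PORT, NVME_HOSTNQN and NVME_SUBNQN above;
    # 10.0.0.2 is the target address that nvmftestinit assigns further down.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e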
00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.489 10:22:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.130 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:12.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:12.131 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.131 10:22:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:12.131 Found net devices under 0000:af:00.0: cvl_0_0 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:12.131 Found net devices under 0000:af:00.1: cvl_0_1 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.131 
10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.131 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:07:12.391 00:07:12.391 --- 10.0.0.2 ping statistics --- 00:07:12.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.391 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:07:12.391 00:07:12.391 --- 10.0.0.1 ping statistics --- 00:07:12.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.391 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=3723229 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3723229 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3723229 ']' 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.391 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:12.391 [2024-07-25 10:22:15.945597] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:12.391 [2024-07-25 10:22:15.945643] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.391 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.391 [2024-07-25 10:22:16.019456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.651 [2024-07-25 10:22:16.095403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.651 [2024-07-25 10:22:16.095439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.651 [2024-07-25 10:22:16.095448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.651 [2024-07-25 10:22:16.095456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.651 [2024-07-25 10:22:16.095463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
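A condensed recap of the plumbing traced above, for anyone reproducing this outside the harness: gather_supported_nvmf_pci_devs matched both 0x8086:0x159b (E810/ice) ports and found cvl_0_0 and cvl_0_1 under them, and nvmf_tcp_init then split the two ports across a network namespace before nvmfappstart launched the target inside it. The interface names and addresses below are the ones this rig reported; the commands are lifted from the trace, with only the nvmf_tgt path shortened and the backgrounding added.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port goes into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP (port 4420) through
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The -m 0xE core mask pins the target's reactors to cores 1-3 (the three "Reactor started on core" notices just below), leaving core 0 free for the initiator-side tools, which this test runs with -c 0x1.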
00:07:12.651 [2024-07-25 10:22:16.095561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.651 [2024-07-25 10:22:16.095651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.651 [2024-07-25 10:22:16.095654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 [2024-07-25 10:22:16.803063] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 Malloc0 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 Delay0 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 [2024-07-25 10:22:16.878215] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.268 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:13.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.268 [2024-07-25 10:22:16.944971] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:15.805 Initializing NVMe Controllers 00:07:15.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:15.805 controller IO queue size 128 less than required 00:07:15.805 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:15.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:15.805 Initialization complete. Launching workers. 
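The subsystem that abort run is driving was put together a few lines earlier through the harness's rpc_cmd helper. Stripped of the xtrace noise, and assuming rpc_cmd ultimately forwards to scripts/rpc.py against the default /var/tmp/spdk.sock socket (the next test in this log calls scripts/rpc.py directly, so the commands themselves are exactly as captured), the configuration is:

  rpc=scripts/rpc.py                                      # shorthand for this sketch only
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB ramdisk, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                          # ~1 s of injected latency, presumably so reads stay in flight long enough to be aborted
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR counters that follow read roughly as: of the reads queued to NSID 1, 123 completed normally and 40730 were failed by aborts, while 40791 abort commands were submitted and 40734 of them succeeded.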
00:07:15.805 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 40730 00:07:15.805 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 40791, failed to submit 62 00:07:15.805 success 40734, unsuccess 57, failed 0 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.805 rmmod nvme_tcp 00:07:15.805 rmmod nvme_fabrics 00:07:15.805 rmmod nvme_keyring 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3723229 ']' 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3723229 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3723229 ']' 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3723229 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3723229 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3723229' 00:07:15.805 killing process with pid 3723229 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3723229 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3723229 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.805 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.806 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.806 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.806 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.806 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:18.343 00:07:18.343 real 0m12.432s 00:07:18.343 user 0m13.075s 00:07:18.343 sys 0m6.352s 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.343 ************************************ 00:07:18.343 END TEST nvmf_abort 00:07:18.343 ************************************ 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.343 ************************************ 00:07:18.343 START TEST nvmf_ns_hotplug_stress 00:07:18.343 ************************************ 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:18.343 * Looking for test storage... 
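That wraps up nvmf_abort: the EXIT trap runs nvmftestfini, which unloads the kernel initiator modules (the rmmod lines above), kills the target by the pid recorded at startup, and undoes the namespace plumbing; the real/user/sys line is the timing for the whole abort script. In condensed form, with the caveat that _remove_spdk_ns is not expanded in this trace, so its last line here is an assumption about what the cleanup amounts to:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 3723229                        # nvmfpid recorded when the target was launched
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns

The suite then moves straight on to the next target test via run_test, invoking test/nvmf/target/ns_hotplug_stress.sh --transport=tcp as shown, and that script begins by locating its test storage and re-sourcing nvmf/common.sh.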
00:07:18.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.343 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.344 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:24.912 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:24.913 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.913 10:22:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:24.913 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:24.913 Found net devices under 0000:af:00.0: cvl_0_0 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:24.913 Found net devices under 0000:af:00.1: cvl_0_1 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:24.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:24.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:07:24.913 00:07:24.913 --- 10.0.0.2 ping statistics --- 00:07:24.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.913 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:07:24.913 00:07:24.913 --- 10.0.0.1 ping statistics --- 00:07:24.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.913 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3727465 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3727465 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3727465 ']' 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
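From here the hotplug-stress script repeats the same bring-up against a fresh cvl_0_0_ns_spdk namespace and a new nvmf_tgt (pid 3727465, again on cores 1-3), then builds a second subsystem, nqn.2016-06.io.spdk:cnode1, backed by a Delay0 bdev and a resizable NULL1 null bdev. The stress itself is the pattern that repeats through the rest of the trace: while a 30-second spdk_nvme_perf random-read workload (PERF_PID 3727857) hammers the target, namespace 1 is hot-removed, Delay0 is re-attached, and NULL1 is grown by one unit. Reduced to its RPC skeleton, with paths shortened and the while-loop shape inferred from the repeating @44-@50 lines below:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove NSID 1 under load
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # plug it back in
      null_size=$((null_size + 1))
      scripts/rpc.py bdev_null_resize NULL1 "$null_size"                        # resize the other namespace's bdev
  done

The "Read completed with error (sct=0, sc=11)" lines interleaved with that loop are the perf initiator reporting reads that land while namespace 1 is detached, which is the behaviour the test is there to exercise; the "Message suppressed 999 times" prefix is just rate-limiting of that error print.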
00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.913 10:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:24.913 [2024-07-25 10:22:28.480037] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:07:24.913 [2024-07-25 10:22:28.480084] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.913 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.913 [2024-07-25 10:22:28.553040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.172 [2024-07-25 10:22:28.627093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.173 [2024-07-25 10:22:28.627128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.173 [2024-07-25 10:22:28.627137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.173 [2024-07-25 10:22:28.627146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.173 [2024-07-25 10:22:28.627153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.173 [2024-07-25 10:22:28.627199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.173 [2024-07-25 10:22:28.627281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.173 [2024-07-25 10:22:28.627283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:25.739 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:25.997 [2024-07-25 10:22:29.502763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.997 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:26.256 10:22:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.256 [2024-07-25 10:22:29.883787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.256 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.517 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:26.776 Malloc0 00:07:26.776 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:26.776 Delay0 00:07:26.776 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.035 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:27.295 NULL1 00:07:27.295 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:27.295 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3727857 00:07:27.295 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:27.295 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:27.295 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.553 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.553 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.830 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:27.830 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:27.830 true 00:07:28.090 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:28.090 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:28.090 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.349 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:28.349 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:28.607 true 00:07:28.607 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:28.607 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.607 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.867 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:28.867 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:29.127 true 00:07:29.127 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:29.127 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.127 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.386 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:29.386 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:29.645 true 00:07:29.645 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:29.645 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.903 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.903 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:29.903 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:30.162 true 00:07:30.162 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:30.162 10:22:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.421 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.421 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:30.421 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:30.680 true 00:07:30.680 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:30.680 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.090 Read completed with error (sct=0, sc=11) 00:07:32.090 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.090 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:32.090 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:32.349 true 00:07:32.349 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:32.349 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.286 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.286 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:33.286 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:33.545 true 00:07:33.545 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:33.545 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.545 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.804 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:33.804 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:34.063 true 00:07:34.063 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:34.063 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.322 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.322 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:34.322 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:34.581 true 00:07:34.581 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:34.581 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.841 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.841 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:34.841 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:35.100 true 00:07:35.100 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:35.100 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.479 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.479 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:36.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.479 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:36.479 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:36.479 true 00:07:36.738 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:36.739 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.676 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.676 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:37.676 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:37.676 true 00:07:37.935 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:37.935 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.935 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.194 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:38.194 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:38.454 true 00:07:38.454 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:38.454 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.454 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.713 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:38.713 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:38.973 true 00:07:38.973 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:38.973 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.910 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.910 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:39.910 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:40.169 true 00:07:40.169 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:40.169 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.428 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.428 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:40.428 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:40.687 true 00:07:40.687 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:40.687 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.953 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.953 [2024-07-25 10:22:44.601509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.953 [2024-07-25 10:22:44.601588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.953 [2024-07-25 10:22:44.601635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:40.953 [identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entries repeated continuously from 2024-07-25 10:22:44.601678 through 10:22:44.623089 (Jenkins timestamps 00:07:40.953 to 00:07:40.958); duplicate lines collapsed. The only other entry in this span, at 00:07:40.955, is: Message suppressed 999 times: Read completed with error (sct=0, sc=15)] 00:07:40.958
[2024-07-25 10:22:44.623136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.623996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.624957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625922] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.625964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.958 [2024-07-25 10:22:44.626542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.626581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.626625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 
[2024-07-25 10:22:44.627503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.627984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.628954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629831] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.629974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:40.959 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:40.959 [2024-07-25 10:22:44.630489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.630998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.631046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.631084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.631126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.631159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.631196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.959 [2024-07-25 10:22:44.631234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
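The two traced lines above are the hot-plug step itself: ns_hotplug_stress.sh bumps null_size and asks the running SPDK target to resize the NULL1 bdev through the bundled JSON-RPC client. A minimal sketch of issuing the same resize by hand, assuming the target is listening on the default RPC socket (/var/tmp/spdk.sock), a null bdev named NULL1 already exists, and the size argument is in MiB as with bdev_null_create:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # grow the namespace's backing null bdev, mirroring ns_hotplug_stress.sh@50 (size assumed to be MiB)
  ./scripts/rpc.py bdev_null_resize NULL1 1018
  # optionally confirm the new size via the bdev listing RPC
  ./scripts/rpc.py bdev_get_bdevs -b NULL1

Resizing the namespace's backing bdev while reads are still in flight is exactly what this stress test exercises, which is consistent with the stream of read-validation errors surrounding these lines.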
00:07:40.959 [2024-07-25 10:22:44.630489 - 10:22:44.645698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same read error continues to repeat after the resize; duplicate lines omitted here)
[2024-07-25 10:22:44.645745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.962 [2024-07-25 10:22:44.645796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.962 [2024-07-25 10:22:44.645850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.962 [2024-07-25 10:22:44.645896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.962 [2024-07-25 10:22:44.645939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.962 [2024-07-25 10:22:44.645980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.646998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.647985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648393] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.648991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 
[2024-07-25 10:22:44.649443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.649526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.650063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:40.963 [2024-07-25 10:22:44.650113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.251 [2024-07-25 10:22:44.650159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.251 [2024-07-25 10:22:44.650210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.251 [2024-07-25 10:22:44.650259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.251 [2024-07-25 10:22:44.650307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.251 [2024-07-25 10:22:44.650354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.650974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.651992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652215] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.652856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 
[2024-07-25 10:22:44.653827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.653976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.654998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.655031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.655069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.655111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.252 [2024-07-25 10:22:44.655153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.655995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656080] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.656966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 
[2024-07-25 10:22:44.657633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.657983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.658959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.659963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.253 [2024-07-25 10:22:44.660330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660369] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.660980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 
[2024-07-25 10:22:44.661595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.661989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.662776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:41.254 [2024-07-25 10:22:44.663422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.663982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.664027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.664075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.664122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.664169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.664214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.664264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.254 [2024-07-25 10:22:44.664309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:41.254 [2024-07-25 10:22:44.664355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.254-00:07:41.260 [2024-07-25 10:22:44.664409 - 10:22:44.690840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error line repeated continuously over this interval)
00:07:41.260 [2024-07-25 10:22:44.690883] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.690916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.690958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.691923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 
[2024-07-25 10:22:44.691966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.692989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.693964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.260 [2024-07-25 10:22:44.694442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694708] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.694962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.695538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 
[2024-07-25 10:22:44.696278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.261 [2024-07-25 10:22:44.696830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.696872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.696913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.696956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.696998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.697975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698584] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.698869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.699987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 
[2024-07-25 10:22:44.700149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.700972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.262 [2024-07-25 10:22:44.701662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.701989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702930] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.702976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.703981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 
[2024-07-25 10:22:44.704165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.704995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.705974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706869] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.706965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.707014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.263 [2024-07-25 10:22:44.707063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 [2024-07-25 10:22:44.707976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.264 
[2024-07-25 10:22:44.708021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.264 [... repeated log output condensed: the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") is emitted several hundred more times between 2024-07-25 10:22:44.708 and 10:22:44.735 (elapsed 00:07:41.264 - 00:07:41.270); duplicate entries omitted ...]
00:07:41.265 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.735942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.735991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.736980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 
[2024-07-25 10:22:44.737066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.737968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.738968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.270 [2024-07-25 10:22:44.739017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739815] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.739981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.740906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 
[2024-07-25 10:22:44.740956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.741542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.271 [2024-07-25 10:22:44.742476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.742972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743643] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.743984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.744812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 
[2024-07-25 10:22:44.745366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.745975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.272 [2024-07-25 10:22:44.746986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747724] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.747973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.748975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 
[2024-07-25 10:22:44.749399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.749989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.750990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.751564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.752057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.273 [2024-07-25 10:22:44.752111] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.752965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.753011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.753055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.753095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.753134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.753171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 [2024-07-25 10:22:44.753213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.274 
[2024-07-25 10:22:44.753253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.274 [... same *ERROR* message repeated once per rejected read command through 2024-07-25 10:22:44.765405 ...]
00:07:41.277 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:41.277 [... same *ERROR* message repeated once per rejected read command through 2024-07-25 10:22:44.780372 ...]
00:07:41.280 [2024-07-25 10:22:44.780419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.780464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.780507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.780562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.780608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.780654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.781994] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.782978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.783023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.783073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 
[2024-07-25 10:22:44.783118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.783165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.783214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.280 [2024-07-25 10:22:44.783264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.783936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.784987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785802] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.785984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.786971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 
[2024-07-25 10:22:44.787015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.787965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.281 [2024-07-25 10:22:44.788676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.788726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.788773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.788817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.788863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.788908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.788955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789925] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.789970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.790587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 
[2024-07-25 10:22:44.791416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.791982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.792960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.282 [2024-07-25 10:22:44.793367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793662] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.793754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.794997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 
[2024-07-25 10:22:44.795327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.795984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.796963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.797961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798106] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.283 [2024-07-25 10:22:44.798645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.798991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 
[2024-07-25 10:22:44.799233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.799989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.800996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801911] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.801983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 true 00:07:41.284 [2024-07-25 10:22:44.802120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.802986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:41.284 [2024-07-25 10:22:44.803124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.284 [2024-07-25 10:22:44.803467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.803515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.803559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.803603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.803652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.803700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.803756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.804973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805870] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.805959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.806937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 
[2024-07-25 10:22:44.806973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.807965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.808014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.285 [2024-07-25 10:22:44.808060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.808967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809754] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.809970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.810982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 
[2024-07-25 10:22:44.811406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.811972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.286 [2024-07-25 10:22:44.812809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.812847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.812886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.812926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.812969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.813532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814041] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.814980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 
[2024-07-25 10:22:44.815098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.815951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.816817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 Message suppressed 999 times: [2024-07-25 10:22:44.817507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 Read completed with error (sct=0, sc=15) 00:07:41.287 [2024-07-25 10:22:44.817554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:41.287 [2024-07-25 10:22:44.817903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.287 [2024-07-25 10:22:44.817954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.818958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.819979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.820021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.820058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.820098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.820136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.820182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.288 [2024-07-25 10:22:44.820676] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.288 [2024-07-25 10:22:44.820726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
... (the same ctrlr_bdev.c *ERROR* entry repeats continuously through [2024-07-25 10:22:44.826387]; duplicate log lines collapsed) ...
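The *ERROR* line above is the NVMe-oF target rejecting a read whose requested transfer (NLB * block size = 1 * 512 bytes) is larger than the data buffer described by the command's SGL (1 byte). Below is a minimal Python sketch of that length check, purely as an illustration of the arithmetic; the real check is C code in SPDK's ctrlr_bdev.c, and the function and parameter names here are hypothetical.

def read_cmd_length_ok(nlb: int, block_size: int, sgl_length: int) -> bool:
    # A read transfers nlb * block_size bytes; it must fit in the buffer
    # described by the request's SGL, otherwise the target fails the command
    # with the error seen in the log above.
    return nlb * block_size <= sgl_length

# The failing case from the log: 1 block of 512 bytes against a 1-byte SGL.
assert not read_cmd_length_ok(nlb=1, block_size=512, sgl_length=1)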
00:07:41.289 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857
00:07:41.289 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
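The two shell trace lines above show the ns_hotplug_stress script first confirming that the target process (PID 3727857) is still alive with kill -0, then hot-removing namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1 through SPDK's JSON-RPC interface. The following is a minimal Python sketch of what that rpc.py call does under the hood, assuming the default /var/tmp/spdk.sock socket and the nqn/nsid parameter names; it is a hand-rolled client for illustration, not SPDK's own rpc library.

import json
import socket

def spdk_rpc(method: str, params: dict, sock_path: str = "/var/tmp/spdk.sock") -> dict:
    # Send one JSON-RPC 2.0 request over the target's Unix-domain socket and
    # keep reading until a complete JSON response object has arrived.
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise RuntimeError("connection closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except json.JSONDecodeError:
                continue  # response not complete yet, keep reading

if __name__ == "__main__":
    # Equivalent of: rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    print(spdk_rpc("nvmf_subsystem_remove_ns",
                   {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 1}))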
... (identical ctrlr_bdev.c *ERROR* entries, interleaved with the shell trace above, continue from [2024-07-25 10:22:44.826432] through [2024-07-25 10:22:44.847519]; duplicate log lines collapsed) ...
[2024-07-25 10:22:44.847562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.847964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.848999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.849962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850279] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.850984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 
[2024-07-25 10:22:44.851350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.294 [2024-07-25 10:22:44.851736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.851783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.851824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.851867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.851913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.851957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.851999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.852959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.853983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854066] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.854960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 
[2024-07-25 10:22:44.855129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.855982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.856995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.857034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.857076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.295 [2024-07-25 10:22:44.857117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857720] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.857993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.858934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 
[2024-07-25 10:22:44.859376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.859966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.860980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861689] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.861991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.296 [2024-07-25 10:22:44.862823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.862862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.862903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.862943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.862984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 
[2024-07-25 10:22:44.863180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.863965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.864995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865887] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.865970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.297 [2024-07-25 10:22:44.866649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.866698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.866754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.866801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.866846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.866892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.866937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.866985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 
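The flood above comes from a negative-path unit test: every rejected read logs that the requested transfer (NLB 1 * 512-byte blocks = 512 bytes) is larger than the 1-byte SGL the command carries. As an illustrative sketch only, not the SPDK source, with function and variable names chosen for the example, the check behind such an error boils down to a transfer-length comparison like this:

/* Illustrative sketch -- not the SPDK implementation. It shows the arithmetic
 * the error message asserts: a read of nlb blocks needs nlb * block_size bytes
 * of buffer, and the command is rejected when the SGL describes fewer bytes. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

static bool
read_cmd_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* The case the test drives: 1 block * 512 bytes = 512 bytes, but the SGL is only 1 byte. */
	read_cmd_fits_sgl(1, 512, 1);	/* logs the error and returns false */
	return 0;
}

Compiled on its own, that sketch prints one line in the same shape as the entries above; the unit test simply drives that rejection path over and over, which is why the log is dominated by the repeated message and the "Message suppressed" notice below.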
[2024-07-25 10:22:44.867078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.867978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.868521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:41.298 [2024-07-25 10:22:44.869075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:41.298 [2024-07-25 10:22:44.869813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.869973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.298 [2024-07-25 10:22:44.870842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.870889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.870935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.870979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.871889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872556] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.872979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 
[2024-07-25 10:22:44.873638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.873977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.874972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.875018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.875050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.875093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.875631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.299 [2024-07-25 10:22:44.875678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.875728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.875769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.875811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.875857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.875902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.875951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876474] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.876987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 
[2024-07-25 10:22:44.877680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.877966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.878603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.879953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880452] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.300 [2024-07-25 10:22:44.880536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.880968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 
[2024-07-25 10:22:44.881567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.881864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.882976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.883992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884270] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.884961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.885001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.885043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.885086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.885133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.885301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.885665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.301 [2024-07-25 10:22:44.885711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.885757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.885800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.885842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 
[2024-07-25 10:22:44.885883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.885928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.885970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.886988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.887029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.302 [2024-07-25 10:22:44.887075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.302 (same *ERROR* message from ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd repeated several hundred times between [2024-07-25 10:22:44.887118] and [2024-07-25 10:22:44.914831]) 00:07:41.308 [2024-07-25 10:22:44.914876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.914916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.914957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.914993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.915839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916765] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.916996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 
[2024-07-25 10:22:44.917909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.917991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.308 [2024-07-25 10:22:44.918444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.918956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.919959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:41.309 [2024-07-25 10:22:44.920050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:41.309 [2024-07-25 10:22:44.920253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.920990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.921988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.922983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.923031] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.923076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.309 [2024-07-25 10:22:44.923123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.923966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 
[2024-07-25 10:22:44.924180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.924961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.925740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.926090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.926135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.926185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.926228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.926264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.926311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.310 [2024-07-25 10:22:44.926353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926884] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.926997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.927038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.927075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.927118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.590 [2024-07-25 10:22:44.927164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.927970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 
[2024-07-25 10:22:44.928054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.928845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.929984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930897] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.930950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 [2024-07-25 10:22:44.931980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.591 
[2024-07-25 10:22:44.932021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.592 [... the identical "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeats several hundred more times, timestamps 2024-07-25 10:22:44.932 through 10:22:44.959 ...]
00:07:41.597 [2024-07-25 10:22:44.959886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.597
[2024-07-25 10:22:44.959925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.959970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.960989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.961032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.597 [2024-07-25 10:22:44.961070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.961655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962688] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.962954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.963936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 
[2024-07-25 10:22:44.963975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.964965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.965946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.598 [2024-07-25 10:22:44.966486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966680] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.966976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 
[2024-07-25 10:22:44.967826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.967980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.968965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.969992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970566] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.599 [2024-07-25 10:22:44.970757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.970803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.970851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.970898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.970945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.970990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.971646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 
[2024-07-25 10:22:44.971851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:41.600 [2024-07-25 10:22:44.972276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.972997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 
10:22:44.973349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.973985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:41.600 [2024-07-25 10:22:44.974450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.974999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.600 [2024-07-25 10:22:44.975907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.975944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.975991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.976988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.977034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.977095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.977144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.601 [2024-07-25 10:22:44.977190] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.601 [2024-07-25 10:22:44.977236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats for every queued read through 2024-07-25 10:22:44.985150; duplicate log entries collapsed ...]
00:07:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
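The burst collapsed above (and the further bursts below) is the target rejecting reads whose requested transfer, NLB 1 * block size 512 = 512 bytes, is larger than the 1-byte data buffer described by the command's SGL; each rejected read completes back to the initiator with an error, and those completion messages are themselves rate-limited, which is what the "Message suppressed 999 times" lines record. When triaging a run like this it is usually enough to count the bursts rather than read them; a minimal sketch, assuming the console output has been saved to a hypothetical build.log:

  # Count every short-SGL read rejection, even when several share one wrapped log line.
  grep -o 'nvmf_bdev_ctrlr_read_cmd: \*ERROR\*: Read NLB 1 \* block size 512 > SGL length 1' build.log | wc -l

  # Break the suppressed completion errors down by status (sct/sc).
  grep -o 'Read completed with error (sct=[0-9]*, sc=[0-9]*)' build.log | sort | uniq -c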
00:07:41.602 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:41.602 [2024-07-25 10:22:45.181381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats through 2024-07-25 10:22:45.193550; duplicate log entries collapsed ...]
00:07:41.605 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
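The rpc.py nvmf_subsystem_add_ns call recorded above is the hotplug step of this stress test: a namespace backed by the Delay0 bdev is attached to nqn.2016-06.io.spdk:cnode1 while the initiator keeps issuing the reads that produce these error bursts. As a rough sketch of that add/remove cycle, not the actual target/ns_hotplug_stress.sh: the loop count, the sleep, the namespace ID 1, and the nvmf_subsystem_remove_ns step are all assumptions here.

  #!/usr/bin/env bash
  # Illustrative namespace hotplug loop: repeatedly attach and detach the
  # Delay0 bdev as a namespace of cnode1 while I/O runs against the subsystem.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for i in $(seq 1 10); do
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # namespace shows up on the target
      sleep 1                                        # leave time for in-flight reads to land on it
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # hot-remove namespace ID 1 again
  done

The delay bdev underneath (Delay0) keeps reads in flight long enough that removal plausibly races with outstanding I/O, which appears to be the condition this stress test exercises.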
00:07:41.605 [2024-07-25 10:22:45.194021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats through 2024-07-25 10:22:45.199739; duplicate log entries collapsed ...] 00:07:41.606
[2024-07-25 10:22:45.199781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.606 [2024-07-25 10:22:45.199825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.199872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.200987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.201968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202424] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.202975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.203961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 
[2024-07-25 10:22:45.204006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.607 [2024-07-25 10:22:45.204944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.204983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.205967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206257] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.206981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 
[2024-07-25 10:22:45.207831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.207976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.608 [2024-07-25 10:22:45.208952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
00:07:41.608 [2024-07-25 10:22:45.209002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:41.608 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
[... identical *ERROR* line repeated for timestamps 10:22:45.209052 through 10:22:45.209368 ...]
00:07:41.608 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
[... identical *ERROR* line repeated for timestamps 10:22:45.209413 through 10:22:45.210811 ...]
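The repeated *ERROR* above is a length check failing: NLB 1 logical block * 512-byte block size = 512 bytes of data requested, but the command's SGL describes only 1 byte of buffer, so the read is rejected at ctrlr_bdev.c:309 (nvmf_bdev_ctrlr_read_cmd). A minimal C sketch of a check with that shape, using hypothetical names rather than the actual SPDK source, looks like this:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: mirrors the arithmetic in the logged message
 * ("Read NLB 1 * block size 512 > SGL length 1"); not SPDK's real code. */
static bool
read_transfer_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
    /* The bytes a read would transfer must fit in the buffer the SGL maps. */
    return (uint64_t)nlb * block_size <= sgl_length;
}

int
main(void)
{
    /* Values taken from the log above: NLB 1, block size 512, SGL length 1. */
    if (!read_transfer_fits_sgl(1, 512, 1)) {
        printf("Read NLB 1 * block size 512 > SGL length 1\n");
    }
    return 0;
}

The same rejection repeats throughout this part of the run, interleaved with the ns_hotplug_stress.sh resize loop shown above (null_size=1019, then bdev_null_resize NULL1 1019).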
[... identical *ERROR* line repeated for timestamps 10:22:45.210853 through 10:22:45.221951 ...]
00:07:41.611 [2024-07-25 10:22:45.221988] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.222548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 
[2024-07-25 10:22:45.223580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.223998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.224045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.611 [2024-07-25 10:22:45.224076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.224986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225712] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.225767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.226998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 
[2024-07-25 10:22:45.227331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.227972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.612 [2024-07-25 10:22:45.228613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.228661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.228702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.228753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.228802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.228847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.228889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229899] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.229980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.230929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 
[2024-07-25 10:22:45.230967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.231966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.232012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.232060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.232108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.232278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.232911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.232959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.232999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.233041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.233086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.233122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.233161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.233206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.233251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.613 [2024-07-25 10:22:45.233296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.233970] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.234990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 
[2024-07-25 10:22:45.235130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.235995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.236983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237545] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.614 [2024-07-25 10:22:45.237845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.237897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.237945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.237990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.238680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 [2024-07-25 10:22:45.239146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.615 
[2024-07-25 10:22:45.239199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:41.616 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
[... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd, logged repeatedly between 2024-07-25 10:22:45.239199 and 10:22:45.266743, omitted ...] 
00:07:41.621 [2024-07-25 10:22:45.266743] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.266786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.266831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.266863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.266901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.266941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.266990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 
[2024-07-25 10:22:45.267854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.267949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.268976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.621 [2024-07-25 10:22:45.269022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.269963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270523] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.270999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.271525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 
[2024-07-25 10:22:45.272183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.622 [2024-07-25 10:22:45.272768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.272810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.272852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.272895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.272936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.272969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.273958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274476] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.274822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.623 [2024-07-25 10:22:45.275006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.275982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 
[2024-07-25 10:22:45.276037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.276944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.897 [2024-07-25 10:22:45.277000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.277957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278757] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.278998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.279934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 
[2024-07-25 10:22:45.279970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.280978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.898 [2024-07-25 10:22:45.281318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.281364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.281547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.281907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.281951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.281990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282572] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.282990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 [2024-07-25 10:22:45.283701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.899 
00:07:41.899 [2024-07-25 10:22:45.283750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:41.899 [... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line (Read NLB 1 * block size 512 > SGL length 1) repeats several hundred times, timestamps 2024-07-25 10:22:45.283 through 10:22:45.311 ...] 
00:07:41.901 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:07:41.905 [... repeated *ERROR* lines continue ...] 
00:07:41.905 [2024-07-25 10:22:45.311565] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.311970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.905 [2024-07-25 10:22:45.312601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.312648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.312699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.312751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.312798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 
[2024-07-25 10:22:45.312845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.312892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.312936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.312985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.313966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.314990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315371] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.315963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 
[2024-07-25 10:22:45.316416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.316991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.317038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.906 [2024-07-25 10:22:45.317082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.317961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.318987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319219] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.319974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 
[2024-07-25 10:22:45.320412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.320999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.907 [2024-07-25 10:22:45.321509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.321975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322897] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.322991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.323567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 
[2024-07-25 10:22:45.324544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.324970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.325951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.326002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.326055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.326111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.908 [2024-07-25 10:22:45.326155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326873] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.326919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.327986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 [2024-07-25 10:22:45.328358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.909 
00:07:41.909 [2024-07-25 10:22:45.328400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 read error is logged several hundred more times between 10:22:45.328 and 10:22:45.347 (Jenkins timestamps 00:07:41.909-00:07:41.913); the duplicate entries are elided here ...]
00:07:41.913 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c:309 read error continues to repeat through 10:22:45.356 (Jenkins timestamps 00:07:41.913-00:07:41.915); the duplicate entries are elided here ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.356721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.357931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 
[2024-07-25 10:22:45.357980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.358952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.359996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360805] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.360969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.361008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.361058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.361103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.361148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.361194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.361239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.915 [2024-07-25 10:22:45.361285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.361920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 
[2024-07-25 10:22:45.361966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.362989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.363480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364774] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.364969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.916 [2024-07-25 10:22:45.365512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 
[2024-07-25 10:22:45.365878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.365972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.366843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.367963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368663] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.368965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 
[2024-07-25 10:22:45.369776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.369964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.917 [2024-07-25 10:22:45.370804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.370851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.370896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.370945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.370997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 true 00:07:41.918 [2024-07-25 10:22:45.371676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.371988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372497] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.372965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.373668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.374040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.374093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 [2024-07-25 10:22:45.374141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.918 
[2024-07-25 10:22:45.374186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeated continuously, timestamps 2024-07-25 10:22:45.374 through 10:22:45.397 ...]
00:07:41.923 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857
00:07:41.923 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.923 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c *ERROR* line continued to repeat, timestamps 10:22:45.397 through at least 10:22:45.402 ...]
[2024-07-25 10:22:45.402245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.402968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.403547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:41.924 [2024-07-25 10:22:45.404056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:42.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.862 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.121 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:43.121 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:43.381 true 00:07:43.381 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:43.381 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.318 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.318 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:44.318 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:44.577 true 00:07:44.577 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:44.577 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.577 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
00:07:44.577 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:44.836 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:07:44.836 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:07:45.095 true
00:07:45.095 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857
00:07:45.095 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:45.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.095 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.365 [2024-07-25 10:22:48.966083 - 10:22:48.984626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.371 [2024-07-25 10:22:48.984663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.984702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.984745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.984788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.984834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.984884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.984930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.984975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.985995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986234] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.986997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 
[2024-07-25 10:22:48.987364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.371 [2024-07-25 10:22:48.987452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.987981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.988966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.989995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.990040] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.990083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.990127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.990170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.372 [2024-07-25 10:22:48.990209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.990954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 
[2024-07-25 10:22:48.991226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.991994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.992972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993916] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.993959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.373 [2024-07-25 10:22:48.994500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.994543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.994584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.994623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.994669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.994711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.995276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.995319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.995364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.995406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.995443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.995486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 [2024-07-25 10:22:48.995530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.374 
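The error being repeated here is the target-side SGL length check: each read asks for NLB 1 block of 512 bytes, but the request's SGL describes only 1 byte of buffer, so 512 > 1 and the command is failed instead of being submitted to the bdev. A minimal shell illustration of that comparison, using the values from the error line above (the variable names are hypothetical and this is not the actual ctrlr_bdev.c code):

    # Illustrative only: the length check implied by the error text, with made-up variable names.
    nlb=1; block_size=512; sgl_length=1
    if (( nlb * block_size > sgl_length )); then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"  # 512 > 1, so the read is rejected
    fi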
[... the same read error keeps repeating, interleaved with the next two trace lines from the stress script ...]
10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:45.374
10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:45.374
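These two trace lines are the ns_hotplug_stress loop bumping null_size and then growing the NULL1 null bdev over SPDK's JSON-RPC interface while read traffic keeps failing against it. A rough standalone sketch of driving the same resize by hand follows; the loop bounds are arbitrary and the default RPC socket is assumed, and only the rpc.py path and the NULL1 name come from this log:

    # Hedged sketch: repeatedly resize the NULL1 null bdev via SPDK's rpc.py (size in MiB).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for size in $(seq 1000 1023); do   # arbitrary illustrative range ending at the 1023 seen above
        "$rpc" bdev_null_resize NULL1 "$size"
    done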
[... the same read error continues to repeat, only the timestamp advancing, through 10:22:49.007133 (elapsed mark 00:07:45.376); the only distinct message in this stretch is the suppression notice below ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:45.376
[2024-07-25 10:22:49.007178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.007956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.008991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.376 [2024-07-25 10:22:49.009517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009853] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.009977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.010971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 
[2024-07-25 10:22:49.011064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.011990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.012037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.012083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.012130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.012181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.377 [2024-07-25 10:22:49.012232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.012991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013802] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.013969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.014534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 
[2024-07-25 10:22:49.015361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.378 [2024-07-25 10:22:49.015475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.015997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.016971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017565] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.017679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.379 [2024-07-25 10:22:49.018802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.018854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.018900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.018944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.018991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 
[2024-07-25 10:22:49.019184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.019983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.020928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021785] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.021999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.022045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.380 [2024-07-25 10:22:49.022088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 
[2024-07-25 10:22:49.022930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.022974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.023966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.024018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.381 [2024-07-25 10:22:49.024060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:45.381 [2024-07-25 10:22:49.024106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats for each remaining iteration of the unit test; only the timestamps change, running from 2024-07-25 10:22:49.024155 through 10:22:49.052047 (elapsed 00:07:45.381 to 00:07:45.388) ...]
00:07:45.388 [2024-07-25 10:22:49.052089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.052957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053251] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.388 [2024-07-25 10:22:49.053565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.053607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 
[2024-07-25 10:22:49.054929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.054975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.055968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.056936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.057405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.057454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:45.389 [2024-07-25 10:22:49.057501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.057551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:45.389 [2024-07-25 10:22:49.057600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.057648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.057695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.057744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.389 [2024-07-25 10:22:49.057788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.675 [2024-07-25 10:22:49.057836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.675 [2024-07-25 10:22:49.057883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.675 [2024-07-25 10:22:49.057929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.675 [2024-07-25 10:22:49.057964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.675 [2024-07-25 10:22:49.058004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.675 [2024-07-25 10:22:49.058052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.675 [2024-07-25 10:22:49.058090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.058967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059829] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.059963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.060985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 
[2024-07-25 10:22:49.061452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.676 [2024-07-25 10:22:49.061763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.061805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.061848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.061889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.061928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.061975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.062981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.063983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064142] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.064966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 
[2024-07-25 10:22:49.065213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.065980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.066029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.066075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.066118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.677 [2024-07-25 10:22:49.066158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.066677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.067974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068117] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.068991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.069045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.069095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.069141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.069189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.069238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.069287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 [2024-07-25 10:22:49.069331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.678 
[2024-07-25 10:22:49.069371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:45.678 [2024-07-25 10:22:49.097397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:45.684 (the *ERROR* line above repeats continuously between [2024-07-25 10:22:49.069371] and [2024-07-25 10:22:49.097397], elapsed 00:07:45.678 through 00:07:45.684; every occurrence is identical except for the microsecond timestamp)
[2024-07-25 10:22:49.097442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.097960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.684 [2024-07-25 10:22:49.098389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.098979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.099669] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.100979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 
[2024-07-25 10:22:49.101362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.101975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.102022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.102062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.102104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.102144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.102189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.685 [2024-07-25 10:22:49.102233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.102958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.103972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104057] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.104961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 
[2024-07-25 10:22:49.105195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.105964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.106972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.107021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.107062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.107104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.107151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.686 [2024-07-25 10:22:49.107184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.107954] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.108964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 
[2024-07-25 10:22:49.109151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.109558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:45.687 [2024-07-25 10:22:49.110143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 
10:22:49.110824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.110970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.111989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.112039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:45.687 [2024-07-25 10:22:49.112087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.112133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.687 [2024-07-25 10:22:49.112183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.112984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.113980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.688 [2024-07-25 10:22:49.114811] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.688 [2024-07-25 10:22:49.114856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.688 [2024-07-25 10:22:49.114905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.688 [2024-07-25 10:22:49.114958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
...
00:07:45.694 [2024-07-25 10:22:49.142556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.694 [2024-07-25 10:22:49.142607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.694 [2024-07-25 10:22:49.142658] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.142719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.142772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.142821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.142870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.143975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 
[2024-07-25 10:22:49.144367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.144960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.145001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.145041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.145080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.145125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.145162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.145202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.694 [2024-07-25 10:22:49.145245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.145984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.146995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147043] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.147990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 
[2024-07-25 10:22:49.148134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.148960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.695 [2024-07-25 10:22:49.149398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.149447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.149496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.149961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150903] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.150993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.151970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 
[2024-07-25 10:22:49.152093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.152732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.153965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.696 [2024-07-25 10:22:49.154484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154901] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.154977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.155942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 
[2024-07-25 10:22:49.155993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.156987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.157983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158683] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.158971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.697 [2024-07-25 10:22:49.159018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.159985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.160028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.160073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.160119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.160159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.160198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 [2024-07-25 10:22:49.160237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.698 
[2024-07-25 10:22:49.160278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.698 (the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd read-length error repeats with successive timestamps from [2024-07-25 10:22:49.160315] through [2024-07-25 10:22:49.162521], all at elapsed time 00:07:45.698)
00:07:45.698 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:45.698 (the same error resumes at [2024-07-25 10:22:49.163013] and repeats through [2024-07-25 10:22:49.167443]; elapsed time advances from 00:07:45.698 to 00:07:45.699)
00:07:45.699 true
00:07:45.699 (the same error continues repeating from [2024-07-25 10:22:49.167484] through [2024-07-25 10:22:49.187788]; elapsed time advances from 00:07:45.699 to 00:07:45.703)
00:07:45.703 [2024-07-25 10:22:49.187824] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.187866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.187908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.187954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.187995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.703 [2024-07-25 10:22:49.188367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.188704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 
[2024-07-25 10:22:49.189397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.189998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:45.704 [2024-07-25 10:22:49.190657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.190996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.704 [2024-07-25 10:22:49.191045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.704 [2024-07-25 10:22:49.191566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
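For context on the two script lines just above: the ns_hotplug_stress flow keeps checking that the nvmf target process (PID 3727857) is still alive with kill -0 and then removes namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1 via scripts/rpc.py while I/O is still in flight. A minimal, hypothetical sketch of such a remove/re-add loop is shown below; it is not the actual ns_hotplug_stress.sh source, and the bdev name Malloc0, iteration count, and sleep interval are illustrative assumptions.

#!/usr/bin/env bash
# Hypothetical namespace hotplug loop (illustrative sketch only, not ns_hotplug_stress.sh).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
tgt_pid=3727857                                     # nvmf target PID seen in the kill -0 check above

for i in $(seq 1 10); do                            # iteration count is an assumption
    kill -0 "$tgt_pid" || exit 1                    # stop if the target process has died
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1        # detach namespace ID 1, as in the trace
    sleep 1                                         # pacing interval is an assumption
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0     # re-attach a bdev as a namespace (bdev name assumed)
    sleep 1
done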
00:07:45.704 [2024-07-25 10:22:49.191045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.704 [... same *ERROR* from ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd (Read NLB 1 * block size 512 > SGL length 1) repeated continuously through 2024-07-25 10:22:49.210242 ...]
00:07:45.708 [2024-07-25 10:22:49.210283] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.210969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 
[2024-07-25 10:22:49.211393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.211651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:45.708 [2024-07-25 10:22:49.212710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.708 [2024-07-25 10:22:49.212904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 
10:22:49.212952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.213966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:45.709 [2024-07-25 10:22:49.214086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.214917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.215962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216850] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.216974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.709 [2024-07-25 10:22:49.217015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 
[2024-07-25 10:22:49.217946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.217985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.218983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.219965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220515] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.220990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.221974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.222020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.222060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.710 [2024-07-25 10:22:49.222097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 
[2024-07-25 10:22:49.222137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.222981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.223959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224371] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.224713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.225963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 
[2024-07-25 10:22:49.226058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.711 [2024-07-25 10:22:49.226918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.712 [2024-07-25 10:22:49.226953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.712 [2024-07-25 10:22:49.226994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.712 [2024-07-25 10:22:49.227036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.712 [2024-07-25 10:22:49.227075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.712 [2024-07-25 10:22:49.227127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.712 [2024-07-25 10:22:49.227168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 
00:07:45.712 [2024-07-25 10:22:49.227215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:45.712 - 00:07:45.717 (last message repeated verbatim with per-call timestamps from 10:22:49.227257 through 10:22:49.255239) 
00:07:45.717 [2024-07-25 10:22:49.255286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.255992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256453] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.717 [2024-07-25 10:22:49.256495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.256989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.257973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 
[2024-07-25 10:22:49.258022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.258971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.259975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260261] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.260719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.261169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.261211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.261248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.261289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.718 [2024-07-25 10:22:49.261323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 
[2024-07-25 10:22:49.261795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.261985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.262994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.263916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:45.719 [2024-07-25 10:22:49.264458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:45.719 [2024-07-25 10:22:49.264504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.264986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.265981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.266032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.266078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.266125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.266173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.266218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.266265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.719 [2024-07-25 10:22:49.266311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266732] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.266966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.267964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 
[2024-07-25 10:22:49.268298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.268970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.269991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.270968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271013] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.271959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.272006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.272056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.272100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.720 [2024-07-25 10:22:49.272147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.721 [2024-07-25 10:22:49.272196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.721 
[2024-07-25 10:22:49.272240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated verbatim for every read submitted by this test, timestamps 2024-07-25 10:22:49.272240 through 10:22:49.299690, elapsed markers 00:07:45.721 to 00:07:45.726 ...]
[2024-07-25 10:22:49.299737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.726 [2024-07-25 10:22:49.299773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.726 [2024-07-25 10:22:49.299812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.726 [2024-07-25 10:22:49.299852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.726 [2024-07-25 10:22:49.300311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.726 [2024-07-25 10:22:49.300354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.300955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.301971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302384] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.302988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.303964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 
[2024-07-25 10:22:49.304052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.727 [2024-07-25 10:22:49.304513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.304974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.305983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306512] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.306653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.307976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 
[2024-07-25 10:22:49.308066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.308993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.728 [2024-07-25 10:22:49.309482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.309834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310868] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.310960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.311963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 
[2024-07-25 10:22:49.312079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.312998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.313992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314731] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.729 [2024-07-25 10:22:49.314818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.314857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.314895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.314939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.314983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 
[2024-07-25 10:22:49.315920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.315967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.316553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:45.730 [2024-07-25 10:22:49.317036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 
10:22:49.317505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.317983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.730 [2024-07-25 10:22:49.318642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:07:45.730 [2024-07-25 10:22:49.318688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:45.730 [... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated continuously from 10:22:49.318743 through 10:22:49.345544 (elapsed 00:07:45.730-00:07:45.736); duplicate lines omitted ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.345981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.346977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 
[2024-07-25 10:22:49.347167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.347963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.736 [2024-07-25 10:22:49.348692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.348732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.348775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.348818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.348861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.348905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.348947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.348996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349484] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.349780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.350998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 
[2024-07-25 10:22:49.351132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.351963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:45.737 [2024-07-25 10:22:49.352579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.352996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353815] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.353985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.015 [2024-07-25 10:22:49.354318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 
[2024-07-25 10:22:49.354896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.354986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.355989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.356963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357691] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.357976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.016 [2024-07-25 10:22:49.358787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.358835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.358878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 
[2024-07-25 10:22:49.358925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.358962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 [2024-07-25 10:22:49.359707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.017 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.017 [2024-07-25 10:22:49.558386] ctrlr_bdev.c: 
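Note on the burst above: as the message itself states, the target rejects each of these reads because the requested transfer (NLB 1 * block size 512 = 512 bytes) is larger than the 1-byte buffer described by the command's SGL, and the read is then completed with an error (the suppressed "Read completed with error (sct=0, sc=11)" notices). The following is a minimal sketch of that kind of length check only; the function and variable names are hypothetical and this is not the actual ctrlr_bdev.c code.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only (hypothetical names, not the SPDK implementation):
 * a read of nlb blocks must fit in the buffer described by the command's SGL. */
static int check_read_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
    uint64_t bytes_requested = nlb * block_size;

    if (bytes_requested > sgl_length) {
        /* Mirrors the condition reported in the log above. */
        fprintf(stderr, "Read NLB %llu * block size %u > SGL length %llu\n",
                (unsigned long long)nlb, block_size,
                (unsigned long long)sgl_length);
        return -1; /* caller would complete the command with an error status */
    }
    return 0;
}

int main(void)
{
    /* The case seen in this log: NLB 1, 512-byte blocks, 1-byte SGL. */
    return check_read_fits_sgl(1, 512, 1) == 0 ? 0 : 1;
}

Given that this is the ns_hotplug_stress test adding the Delay0 namespace to nqn.2016-06.io.spdk:cnode1 while I/O is outstanding, a burst of such rejections appearing in the log is consistent with the stress scenario being exercised.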
00:07:46.017 [2024-07-25 10:22:49.558386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:46.017 .. 00:07:46.018 [2024-07-25 10:22:49.558440 .. 10:22:49.565861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry logged once per rejected read in this interval)
00:07:46.018 [2024-07-25 10:22:49.565902] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.565944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.565983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.566999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 
[2024-07-25 10:22:49.567230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.567712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.568962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.569974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570104] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.019 [2024-07-25 10:22:49.570870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.570905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.570951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.570996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 
[2024-07-25 10:22:49.571418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.571973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.572999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573687] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.573975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.020 [2024-07-25 10:22:49.574761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.574804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.574847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.574890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.574931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.574971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 
[2024-07-25 10:22:49.575274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.575955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.576973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.577957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578041] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.578994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 
[2024-07-25 10:22:49.579230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.021 [2024-07-25 10:22:49.579614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.579977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.580673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.581977] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.582985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 [2024-07-25 10:22:49.583027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.022 
[2024-07-25 10:22:49.583067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd read error repeats continuously, timestamps 10:22:49.583114 through 10:22:49.587218 ...]
00:07:46.023 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:07:46.023 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same read error repeats at 10:22:49.587721 and 10:22:49.587771 ...]
00:07:46.023 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
[... the same read error repeats continuously, timestamps 10:22:49.587818 through 10:22:49.610958 ...]
00:07:46.028 [2024-07-25 10:22:49.610999] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.611998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 
[2024-07-25 10:22:49.612092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.612987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.613479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.614372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.614425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.614471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.028 [2024-07-25 10:22:49.614522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.614958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615295] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.615957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 
[2024-07-25 10:22:49.616457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.616989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.617973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.029 [2024-07-25 10:22:49.618837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.618877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.618916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.618956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.618998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619249] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.619959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 
[2024-07-25 10:22:49.620368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.620992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.621972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622758] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.622995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.623043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.623091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.623143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.623190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.623239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.030 [2024-07-25 10:22:49.623287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 
[2024-07-25 10:22:49.623915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.623959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.624979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.625991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626645] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.626988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.627593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.628065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.628108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.628149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 [2024-07-25 10:22:49.628189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.031 
[2024-07-25 10:22:49.628227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... the same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c:309 (nvmf_bdev_ctrlr_read_cmd) repeats continuously between 10:22:49.628227 and 10:22:49.656127 (elapsed 00:07:46.031-00:07:46.037); the entries are identical except for their timestamps ...] 
00:07:46.033 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:07:46.037 [2024-07-25 10:22:49.656127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.656972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657223] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.657973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.658021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.658069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.658116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.658161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.658207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.658254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.037 [2024-07-25 10:22:49.658302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 
[2024-07-25 10:22:49.658880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.658973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.659963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.660706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661588] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.661976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.038 [2024-07-25 10:22:49.662385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 
[2024-07-25 10:22:49.662691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.662979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.663985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.664971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665450] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.665977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 
[2024-07-25 10:22:49.666684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.666991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.667032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.667073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.667106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.039 [2024-07-25 10:22:49.667149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.667980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.668973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669492] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.669978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 
[2024-07-25 10:22:49.670585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.670853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.671960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.040 [2024-07-25 10:22:49.672463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.672996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.673041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.673089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.673135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.673183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.673227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.673270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.041 [2024-07-25 10:22:49.673317] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:46.041 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* entries repeated, wall-clock 10:22:49.673362 through 10:22:49.690507 ...]
00:07:46.044 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:46.044 [... identical *ERROR* entries continue, wall-clock 10:22:49.691029 through 10:22:49.701464, elapsed 00:07:46.044 through 00:07:46.317 ...] 00:07:46.317
[2024-07-25 10:22:49.701504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.701972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.702966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.703758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704222] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.317 [2024-07-25 10:22:49.704951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.704998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 
[2024-07-25 10:22:49.705390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.705996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.706953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.707990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708240] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.708999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 
[2024-07-25 10:22:49.709480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.318 [2024-07-25 10:22:49.709936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.709977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.710487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.711978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712257] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.712977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 
[2024-07-25 10:22:49.713389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.713759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.714983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.715029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.715074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.715120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.319 [2024-07-25 10:22:49.715169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.715973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716177] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.716972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 
[2024-07-25 10:22:49.717759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.717995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:46.320 [2024-07-25 10:22:49.718896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeat continuously; occurrences timestamped 2024-07-25 10:22:49.718933 through 10:22:49.743900 (elapsed 00:07:46.320 to 00:07:46.325) collapsed ...]
00:07:46.325 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same read-length error continues; occurrences timestamped 2024-07-25 10:22:49.744407 through 10:22:49.745659 (elapsed 00:07:46.325 to 00:07:46.326) collapsed ...]
00:07:46.326 [2024-07-25 10:22:49.745707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.745764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.745813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 true 00:07:46.326 [2024-07-25 10:22:49.745857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.745904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.745954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.326 [2024-07-25 10:22:49.746896] 
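The flood above is one condition hit over and over: the NVMe-oF target rejects a read whenever NLB (the number of logical blocks) times the block size exceeds the buffer described by the command's SGL, and each rejected read logs one of these lines. The "Message suppressed 999 times" entry simply indicates that the matching "Read completed with error" completions were rate-limited rather than logged individually. A minimal standalone C sketch of that check, with simplified names and the values seen in the log (the helper and main() are illustrative; the real SPDK nvmf_bdev_ctrlr_read_cmd() in ctrlr_bdev.c differs in detail):

/*
 * Minimal sketch of the length check behind the repeated error above.
 * Names are simplified and the helper is illustrative; the real SPDK
 * nvmf_bdev_ctrlr_read_cmd() in ctrlr_bdev.c differs in detail.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Returns 1 when the requested read fits in the SGL-described buffer. */
static int
read_fits_sgl(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
			num_blocks, block_size, sgl_length);
		return 0;
	}
	return 1;
}

int
main(void)
{
	/* Values from the log: NLB 1, block size 512, SGL length 1 byte,
	 * so the read is rejected and completed with an error status. */
	return read_fits_sgl(1, 512, 1) ? 0 : 1;
}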
00:07:46.326 [identical ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" errors repeated from 10:22:49.746939 through 10:22:49.768294; duplicate lines elided]
[2024-07-25 10:22:49.768340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.768933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:46.331 [2024-07-25 10:22:49.768976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.331 [2024-07-25 
10:22:49.769338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.769999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:46.331 [2024-07-25 10:22:49.770948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.770991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.771980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.331 [2024-07-25 10:22:49.772851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.332 [2024-07-25 10:22:49.772894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.332 [2024-07-25 10:22:49.772932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:46.332 [2024-07-25 10:22:49.772974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:47.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.269 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.269 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:07:47.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.529 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:47.529 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:47.529 true 00:07:47.529 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:47.529 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.467 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.727 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:48.727 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:48.727 true 00:07:48.727 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:48.727 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.987 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.246 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:49.246 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:49.246 true 00:07:49.246 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:49.246 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.638 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.638 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 
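The sh@44 through sh@50 entries above are the single-namespace phase of the ns_hotplug_stress test: while the background I/O generator (PID 3727857) is still alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, hot-adds it back on top of the Delay0 bdev, then bumps null_size and resizes NULL1, and reads caught in the gap complete with the suppressed (sct=0, sc=11) errors. A minimal bash sketch of that loop, reconstructed only from this trace; the rpc shorthand, the starting null_size and the PERF_PID bookkeeping are assumptions, not the actual script contents:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=3727857                                 # assumed variable name for the background I/O generator's PID
    null_size=1024                                   # assumed starting value; the trace shows 1025, 1026, ...
    while kill -0 "$PERF_PID" 2>/dev/null; do        # line 44: keep going while the I/O job is still running
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # line 45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # line 46: hot-add it back on the Delay0 bdev
        null_size=$((null_size + 1))                                      # line 49
        "$rpc" bdev_null_resize NULL1 "$null_size"                        # line 50: prints "true" on success
    done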
00:07:50.638 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:50.897 true 00:07:50.897 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:50.897 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.834 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.834 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:51.834 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:52.093 true 00:07:52.093 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:52.093 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.352 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.352 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:52.352 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:52.611 true 00:07:52.611 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:52.611 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.990 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.990 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:53.990 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:53.990 true 00:07:54.248 10:22:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:54.248 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.183 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.183 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:55.183 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:55.183 true 00:07:55.442 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:55.442 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.442 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.700 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:55.700 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:55.959 true 00:07:55.959 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:55.959 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.336 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.336 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:57.336 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:57.336 true 00:07:57.336 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857 00:07:57.336 10:23:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.272 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.273 Initializing NVMe Controllers
00:07:58.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:58.273 Controller IO queue size 128, less than required.
00:07:58.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:58.273 Controller IO queue size 128, less than required.
00:07:58.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:58.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:58.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:58.273 Initialization complete. Launching workers.
00:07:58.273 ========================================================
00:07:58.273 Latency(us)
00:07:58.273 Device Information : IOPS MiB/s Average min max
00:07:58.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3351.70 1.64 24462.51 1819.89 1010583.75
00:07:58.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16621.10 8.12 7681.89 2155.28 358906.11
00:07:58.273 ========================================================
00:07:58.273 Total : 19972.80 9.75 10497.90 1819.89 1010583.75
00:07:58.273
00:07:58.532 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:07:58.532 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
true
00:07:58.532 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727857
00:07:58.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3727857) - No such process
00:07:58.532 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3727857
00:07:58.532 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.790 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:59.049 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:59.049 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:59.049 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:59.049 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.049 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:59.049 null0 00:07:59.307 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.307 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.307 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:59.307 null1 00:07:59.307 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.307 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.307 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:59.566 null2 00:07:59.566 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.566 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.566 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:59.823 null3 00:07:59.823 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.823 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.824 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:59.824 null4 00:07:59.824 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.824 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.824 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:00.081 null5 00:08:00.081 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.081 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.081 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:00.340 null6 00:08:00.340 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.340 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.340 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:00.340 null7 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # 
(( ++i )) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.599 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
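Before the parallel phase starts, the sh@58 through sh@60 entries above create one null bdev per worker, null0 through null7, each with the size argument 100 and a 4096-byte block size. A sketch of that setup as implied by the trace; the explicit for loop and the rpc shorthand are assumptions:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                       # line 58
    pids=()                                          # line 58: filled in as the workers are launched
    for (( i = 0; i < nthreads; i++ )); do           # line 59
        "$rpc" bdev_null_create "null$i" 100 4096    # line 60: arguments as seen in the trace: name, size, block size
    done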
00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
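Each of the eight background workers whose PIDs are collected above runs the add_remove helper traced at ns_hotplug_stress.sh lines 14 through 18: it binds one namespace ID to one null bdev and hot-adds and hot-removes that namespace ten times against nqn.2016-06.io.spdk:cnode1, with a final wait (the sh@66 entry below) reaping all eight workers. A sketch reconstructed from the interleaved trace; the function body, the nsid-to-bdev pairing and the rpc shorthand are inferred rather than copied from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()

    add_remove() {
        local nsid=$1 bdev=$2                                                          # line 14, e.g. nsid=1 bdev=null0
        for (( i = 0; i < 10; i++ )); do                                               # line 16: ten add/remove rounds per worker
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # line 17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # line 18
        done
    }

    for (( i = 0; i < nthreads; i++ )); do           # lines 62-64: launch the workers in the background
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                # line 66: the "wait 3733677 3733678 ..." entry in the trace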
00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3733677 3733678 3733680 3733682 3733684 3733686 3733687 3733689 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.600 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.860 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.860 10:23:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.119 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.379 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.379 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.379 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.379 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.379 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.379 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.379 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.379 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.639 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.898 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.899 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.899 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.899 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.899 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.181 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.444 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.444 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.444 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.444 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.444 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.445 10:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.445 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.704 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
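The @16/@17/@18 entries above are bash xtrace from target/ns_hotplug_stress.sh: line 16 advances a bounded loop counter, line 17 attaches one of the null bdevs to nqn.2016-06.io.spdk:cnode1 as a namespace, and line 18 detaches it again. The script itself is not reproduced in this log, so the following is only a minimal sketch of that shape, assuming each of the eight null bdevs is driven by its own backgrounded worker (which would account for the interleaved bursts of eight adds followed by eight removes); the helper name add_remove is invented for illustration.

# Hedged sketch reconstructed from the xtrace above, not copied from the SPDK source tree.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                  # hypothetical helper, one worker per namespace id
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do              # ns_hotplug_stress.sh@16
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17, as seen in the trace
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18, as seen in the trace
    done
}

for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &          # null0..null7, the bdev names seen in the trace
done
wait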
00:08:02.704 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.705 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.964 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
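While this churn is running, the live namespace list on cnode1 can be checked with the nvmf_get_subsystems RPC. The test itself never issues this query; it is shown only as a usage example, and the jq field names (nqn, namespaces, nsid) are assumptions about the usual shape of that RPC's output.

# Illustrative query only; not part of ns_hotplug_stress.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_get_subsystems \
    | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'   # field names assumed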
00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.223 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.482 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.483 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.742 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.742 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.742 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.743 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.003 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:04.263 10:23:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.263 rmmod nvme_tcp 00:08:04.263 rmmod nvme_fabrics 00:08:04.263 rmmod nvme_keyring 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3727465 ']' 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3727465 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3727465 ']' 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3727465 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3727465 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3727465' 00:08:04.263 killing process with pid 3727465 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3727465 00:08:04.263 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3727465 00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
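The nvmftestfini/nvmfcleanup entries above unload the host-side NVMe modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are the modprobe -r output), kill the nvmf target process, pid 3727465 in this run, and then remove the SPDK network namespace. Below is a condensed paraphrase of those steps, not the literal nvmf/common.sh code; the namespace-removal command is an assumption about what _remove_spdk_ns does.

# Condensed paraphrase of the teardown traced above.
sync
for i in {1..20}; do                          # common.sh@121: retried because the module may still be busy
    modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics

nvmfpid=3727465                               # pid reported by the killprocess entries above
kill "$nvmfpid" && wait "$nvmfpid"            # killprocess: stop the nvmf_tgt reactor

ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                      # matches the flush in the entries that follow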
00:08:04.523 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:07.062 00:08:07.062 real 0m48.632s 00:08:07.062 user 3m7.995s 00:08:07.062 sys 0m21.167s 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.062 ************************************ 00:08:07.062 END TEST nvmf_ns_hotplug_stress 00:08:07.062 ************************************ 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.062 ************************************ 00:08:07.062 START TEST nvmf_delete_subsystem 00:08:07.062 ************************************ 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:07.062 * Looking for test storage... 00:08:07.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 
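The END/START banners and the run_test entry above come from the autotest harness: run_test (defined in autotest_common.sh) wraps each test script, prints the banners, and reports the real/user/sys times seen at the end of nvmf_ns_hotplug_stress before delete_subsystem.sh starts. A rough sketch of that wrapper, paraphrased from the banners rather than copied from the harness:

# Rough paraphrase of the run_test wrapper; the real helper does more bookkeeping.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    time "$@"                                 # e.g. .../test/nvmf/target/delete_subsystem.sh --transport=tcp
    echo "END TEST $name"
    echo "************************************"
}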
00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.062 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.633 10:23:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:13.633 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:13.633 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:13.633 Found net devices under 0000:af:00.0: cvl_0_0 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:13.633 Found net devices under 0000:af:00.1: cvl_0_1 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.633 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.634 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.634 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.634 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.634 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.634 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.634 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.634 10:23:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:13.634 00:08:13.634 --- 10.0.0.2 ping statistics --- 00:08:13.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.634 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:08:13.634 00:08:13.634 --- 10.0.0.1 ping statistics --- 00:08:13.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.634 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3738071 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:13.634 
10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3738071 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3738071 ']' 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.634 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 [2024-07-25 10:23:17.239960] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:08:13.634 [2024-07-25 10:23:17.240011] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.634 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.634 [2024-07-25 10:23:17.314296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:13.894 [2024-07-25 10:23:17.389521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.894 [2024-07-25 10:23:17.389558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.894 [2024-07-25 10:23:17.389567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.894 [2024-07-25 10:23:17.389577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.894 [2024-07-25 10:23:17.389584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:13.894 [2024-07-25 10:23:17.389628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.894 [2024-07-25 10:23:17.389631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 [2024-07-25 10:23:18.109144] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.461 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.462 [2024-07-25 10:23:18.125311] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.462 NULL1 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.462 Delay0 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3738344 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:14.462 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:14.720 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.720 [2024-07-25 10:23:18.209906] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
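# --- Illustrative sketch (not part of the captured log) ---------------------
# The rpc_cmd calls traced above come from test/nvmf/target/delete_subsystem.sh
# and wrap SPDK's scripts/rpc.py. A rough standalone equivalent of the
# create / load / delete-under-I/O sequence exercised here is sketched below;
# the rpc.py path and the $rpc shell variable are assumptions for illustration,
# and nvmf_tgt is assumed to be running already on the default
# /var/tmp/spdk.sock RPC socket shown earlier in this log.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                       # 1000 MB null bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                # background random I/O load
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # delete while I/O is still in flight
# ---------------------------------------------------------------------------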
00:08:16.622 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.622 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.622 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 starting I/O failed: -6 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 [2024-07-25 10:23:20.466596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd30000d000 is same with the state(5) to be set 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, 
sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Write completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 Read completed with error (sct=0, sc=8) 00:08:16.881 [2024-07-25 10:23:20.467028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd300000c00 is same with the state(5) to be set 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 
Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 [2024-07-25 10:23:20.467247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd30000d660 is same with the state(5) to be set 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 
00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Read completed with error (sct=0, sc=8) 00:08:16.882 Write completed with error (sct=0, sc=8) 00:08:16.882 starting I/O failed: -6 00:08:16.882 [2024-07-25 10:23:20.467655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2266710 is same with the state(5) to be set 00:08:17.820 [2024-07-25 10:23:21.428503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246450 is same with the state(5) to be set 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 [2024-07-25 10:23:21.469067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2266a40 is same with the state(5) to be set 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error 
(sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Write completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.820 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 [2024-07-25 10:23:21.469228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2245af0 is same with the state(5) to be set 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 [2024-07-25 10:23:21.469336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd30000d330 is same with the state(5) to be set 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 
00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 Read completed with error (sct=0, sc=8) 00:08:17.821 Write completed with error (sct=0, sc=8) 00:08:17.821 [2024-07-25 10:23:21.469496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2245910 is same with the state(5) to be set 00:08:17.821 Initializing NVMe Controllers 00:08:17.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:17.821 Controller IO queue size 128, less than required. 00:08:17.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:17.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:17.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:17.821 Initialization complete. Launching workers. 00:08:17.821 ======================================================== 00:08:17.821 Latency(us) 00:08:17.821 Device Information : IOPS MiB/s Average min max 00:08:17.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.65 0.09 951270.65 796.80 1012103.36 00:08:17.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.37 0.08 878051.13 446.87 1043564.05 00:08:17.821 ======================================================== 00:08:17.821 Total : 341.01 0.17 917911.54 446.87 1043564.05 00:08:17.821 00:08:17.821 [2024-07-25 10:23:21.470370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246450 (9): Bad file descriptor 00:08:17.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:17.821 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.821 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:17.821 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3738344 00:08:17.821 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3738344 00:08:18.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3738344) - No such process 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@45 -- # NOT wait 3738344 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3738344 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3738344 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.392 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.393 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.393 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.393 [2024-07-25 10:23:21.998946] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3738906 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:18.393 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.393 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.393 [2024-07-25 10:23:22.066040] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:18.989 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.989 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:18.989 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.557 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:19.557 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:19.557 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.133 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.133 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:20.133 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.392 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.392 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:20.392 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.960 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.960 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:20.960 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.529 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.529 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:21.529 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.529 Initializing NVMe Controllers 00:08:21.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:21.529 Controller IO queue size 128, less than required. 00:08:21.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:21.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:21.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:21.529 Initialization complete. Launching workers. 00:08:21.529 ======================================================== 00:08:21.529 Latency(us) 00:08:21.529 Device Information : IOPS MiB/s Average min max 00:08:21.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002989.31 1000227.24 1010403.80 00:08:21.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005083.91 1000316.53 1041297.61 00:08:21.529 ======================================================== 00:08:21.529 Total : 256.00 0.12 1004036.61 1000227.24 1041297.61 00:08:21.529 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3738906 00:08:22.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3738906) - No such process 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3738906 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.097 rmmod nvme_tcp 00:08:22.097 rmmod nvme_fabrics 00:08:22.097 rmmod nvme_keyring 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3738071 ']' 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3738071 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3738071 ']' 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3738071 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:22.097 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.098 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3738071 00:08:22.098 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.098 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.098 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3738071' 00:08:22.098 killing process with pid 3738071 00:08:22.098 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3738071 00:08:22.098 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3738071 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.357 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.265 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.265 00:08:24.265 real 0m17.699s 00:08:24.265 user 0m30.165s 00:08:24.265 sys 0m6.999s 00:08:24.265 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.265 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.265 ************************************ 00:08:24.265 END TEST nvmf_delete_subsystem 00:08:24.265 ************************************ 00:08:24.525 10:23:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:24.525 10:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:24.525 10:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.525 10:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.525 ************************************ 00:08:24.525 START TEST nvmf_host_management 00:08:24.525 ************************************ 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:24.525 * Looking for test storage... 
00:08:24.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.525 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.096 
10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:31.096 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:31.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:31.096 Found net devices under 0000:af:00.0: cvl_0_0 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:31.096 Found net devices under 0000:af:00.1: cvl_0_1 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.096 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:08:31.096 00:08:31.096 --- 10.0.0.2 ping statistics --- 00:08:31.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.096 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:31.097 00:08:31.097 --- 10.0.0.1 ping statistics --- 00:08:31.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.097 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3743251 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3743251 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3743251 ']' 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.097 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.355 [2024-07-25 10:23:34.804634] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:08:31.355 [2024-07-25 10:23:34.804678] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.355 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.355 [2024-07-25 10:23:34.879472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.355 [2024-07-25 10:23:34.950042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.355 [2024-07-25 10:23:34.950085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.355 [2024-07-25 10:23:34.950094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.355 [2024-07-25 10:23:34.950102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.355 [2024-07-25 10:23:34.950108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.355 [2024-07-25 10:23:34.950214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.355 [2024-07-25 10:23:34.950295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.355 [2024-07-25 10:23:34.950388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.355 [2024-07-25 10:23:34.950389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:31.921 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.921 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:31.921 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.921 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.921 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.180 [2024-07-25 10:23:35.663092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.180 Malloc0 00:08:32.180 [2024-07-25 10:23:35.729871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3743423 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3743423 /var/tmp/bdevperf.sock 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3743423 ']' 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
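At this point the target side of the host_management test is fully set up inside the cvl_0_0_ns_spdk namespace: nvmf_tgt (pid 3743251) has created the TCP transport (rpc_cmd nvmf_create_transport -t tcp -o -u 8192), exposed the 64 MiB / 512 B-block Malloc0 bdev through nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2 port 4420, and (per the remove_host/add_host steps later in the log) allows nqn.2016-06.io.spdk:host0; bdevperf is now being started as the initiator. The exact RPC batch comes from the rpcs.txt that host_management.sh cats into rpc_cmd, so the lines below are only a minimal standalone sketch of the same configuration using stock scripts/rpc.py (the method names are standard SPDK RPCs; the arguments in rpcs.txt may be spelled differently):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                        # as issued via rpc_cmd above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                           # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0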
00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:32.180 { 00:08:32.180 "params": { 00:08:32.180 "name": "Nvme$subsystem", 00:08:32.180 "trtype": "$TEST_TRANSPORT", 00:08:32.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.180 "adrfam": "ipv4", 00:08:32.180 "trsvcid": "$NVMF_PORT", 00:08:32.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.180 "hdgst": ${hdgst:-false}, 00:08:32.180 "ddgst": ${ddgst:-false} 00:08:32.180 }, 00:08:32.180 "method": "bdev_nvme_attach_controller" 00:08:32.180 } 00:08:32.180 EOF 00:08:32.180 )") 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:32.180 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:32.180 "params": { 00:08:32.180 "name": "Nvme0", 00:08:32.180 "trtype": "tcp", 00:08:32.180 "traddr": "10.0.0.2", 00:08:32.180 "adrfam": "ipv4", 00:08:32.180 "trsvcid": "4420", 00:08:32.180 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.180 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:32.180 "hdgst": false, 00:08:32.180 "ddgst": false 00:08:32.180 }, 00:08:32.180 "method": "bdev_nvme_attach_controller" 00:08:32.180 }' 00:08:32.180 [2024-07-25 10:23:35.831446] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:08:32.180 [2024-07-25 10:23:35.831496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743423 ] 00:08:32.180 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.439 [2024-07-25 10:23:35.902124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.439 [2024-07-25 10:23:35.970943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.439 Running I/O for 10 seconds... 
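The ten-second verify job is now issuing I/O against Nvme0n1 through the JSON config fed to bdevperf on /dev/fd/63 (the bdev_nvme_attach_controller entry printed above: tcp, 10.0.0.2:4420, subnqn cnode0, hostnqn host0). What follows is the actual host-management check: waitforio polls bdev_get_iostat until Nvme0n1 has completed at least 100 reads (read_io_count=835 below), then host_management.sh@84 removes the host NQN from the subsystem while I/O is still in flight. The target tears down the queue pair, so every outstanding command completes with ABORTED - SQ DELETION (the long run of nvme_qpair notices that follows), and bdevperf's reconnect is rejected with "does not allow host" (the FABRIC CONNECT completion with sct 1, sc 132), leaving the controller in a failed state and ending the job in error after about 0.6 s; host_management.sh@85 then re-adds the host so the later 1-second bdevperf run can complete cleanly. In stock scripts/rpc.py form, the two RPCs driven through rpc_cmd are roughly:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # while bdevperf is running: expect aborted I/O and a failed reconnect
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # restore access for the follow-up run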
00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.007 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.267 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.267 [2024-07-25 
10:23:36.725514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.267 [2024-07-25 10:23:36.725552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.267 [2024-07-25 10:23:36.725570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.267 [2024-07-25 10:23:36.725580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.267 [2024-07-25 10:23:36.725592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.267 [2024-07-25 10:23:36.725602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.267 [2024-07-25 10:23:36.725614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.267 [2024-07-25 10:23:36.725623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.267 [2024-07-25 10:23:36.725635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.267 [2024-07-25 10:23:36.725644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.267 [2024-07-25 10:23:36.725655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.267 [2024-07-25 10:23:36.725664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.267 [2024-07-25 10:23:36.725675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 
10:23:36.725774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725971] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.725980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.725993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.268 [2024-07-25 10:23:36.726433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.268 [2024-07-25 10:23:36.726442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.269 [2024-07-25 10:23:36.726851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.726918] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe7ea30 was disconnected and freed. reset controller. 00:08:33.269 [2024-07-25 10:23:36.727804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:33.269 task offset: 121088 on job bdev=Nvme0n1 fails 00:08:33.269 00:08:33.269 Latency(us) 00:08:33.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.269 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:33.269 Job: Nvme0n1 ended in about 0.60 seconds with error 00:08:33.269 Verification LBA range: start 0x0 length 0x400 00:08:33.269 Nvme0n1 : 0.60 1489.38 93.09 106.38 0.00 39379.76 2018.51 39426.46 00:08:33.269 =================================================================================================================== 00:08:33.269 Total : 1489.38 93.09 106.38 0.00 39379.76 2018.51 39426.46 00:08:33.269 [2024-07-25 10:23:36.729337] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.269 [2024-07-25 10:23:36.729355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6da70 (9): Bad file descriptor 00:08:33.269 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.269 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:33.269 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.269 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.269 [2024-07-25 10:23:36.732850] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:33.269 [2024-07-25 10:23:36.732987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:33.269 [2024-07-25 10:23:36.733014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.269 [2024-07-25 10:23:36.733035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:33.269 [2024-07-25 10:23:36.733045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:33.269 [2024-07-25 10:23:36.733055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:33.269 [2024-07-25 10:23:36.733064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6da70 00:08:33.269 [2024-07-25 10:23:36.733086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6da70 (9): Bad file descriptor 00:08:33.269 [2024-07-25 10:23:36.733099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:33.269 [2024-07-25 10:23:36.733109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:33.269 [2024-07-25 10:23:36.733119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:08:33.269 [2024-07-25 10:23:36.733133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:33.269 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.269 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:34.206 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3743423 00:08:34.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3743423) - No such process 00:08:34.206 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:34.206 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:34.206 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:34.206 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:34.206 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:34.206 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:34.207 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:34.207 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:34.207 { 00:08:34.207 "params": { 00:08:34.207 "name": "Nvme$subsystem", 00:08:34.207 "trtype": "$TEST_TRANSPORT", 00:08:34.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.207 "adrfam": "ipv4", 00:08:34.207 "trsvcid": "$NVMF_PORT", 00:08:34.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.207 "hdgst": 
${hdgst:-false}, 00:08:34.207 "ddgst": ${ddgst:-false} 00:08:34.207 }, 00:08:34.207 "method": "bdev_nvme_attach_controller" 00:08:34.207 } 00:08:34.207 EOF 00:08:34.207 )") 00:08:34.207 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:34.207 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:34.207 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:34.207 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:34.207 "params": { 00:08:34.207 "name": "Nvme0", 00:08:34.207 "trtype": "tcp", 00:08:34.207 "traddr": "10.0.0.2", 00:08:34.207 "adrfam": "ipv4", 00:08:34.207 "trsvcid": "4420", 00:08:34.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:34.207 "hdgst": false, 00:08:34.207 "ddgst": false 00:08:34.207 }, 00:08:34.207 "method": "bdev_nvme_attach_controller" 00:08:34.207 }' 00:08:34.207 [2024-07-25 10:23:37.795362] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:08:34.207 [2024-07-25 10:23:37.795415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743758 ] 00:08:34.207 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.207 [2024-07-25 10:23:37.866854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.465 [2024-07-25 10:23:37.934062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.724 Running I/O for 1 seconds... 00:08:35.665 00:08:35.665 Latency(us) 00:08:35.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.665 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:35.665 Verification LBA range: start 0x0 length 0x400 00:08:35.665 Nvme0n1 : 1.04 1482.57 92.66 0.00 0.00 42612.68 8441.04 39216.74 00:08:35.665 =================================================================================================================== 00:08:35.665 Total : 1482.57 92.66 0.00 0.00 42612.68 8441.04 39216.74 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:35.934 10:23:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.934 rmmod nvme_tcp 00:08:35.934 rmmod nvme_fabrics 00:08:35.934 rmmod nvme_keyring 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3743251 ']' 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3743251 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3743251 ']' 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3743251 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3743251 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3743251' 00:08:35.934 killing process with pid 3743251 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3743251 00:08:35.934 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3743251 00:08:36.193 [2024-07-25 10:23:39.780010] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.194 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - 
SIGINT SIGTERM EXIT 00:08:38.732 00:08:38.732 real 0m13.858s 00:08:38.732 user 0m23.556s 00:08:38.732 sys 0m6.406s 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.732 ************************************ 00:08:38.732 END TEST nvmf_host_management 00:08:38.732 ************************************ 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.732 ************************************ 00:08:38.732 START TEST nvmf_lvol 00:08:38.732 ************************************ 00:08:38.732 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:38.732 * Looking for test storage... 00:08:38.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.732 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.733 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.307 
10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:45.307 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.307 
10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:45.307 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:45.307 Found net devices under 0000:af:00.0: cvl_0_0 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:45.307 Found net devices under 0000:af:00.1: cvl_0_1 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
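Editor's note for readers skimming the xtrace output: the nvmf_tcp_init block that follows (nvmf/common.sh@418 onward) turns the two cvl interfaces found above into a point-to-point NVMe/TCP test topology. The target-side port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, and reachability is ping-tested in each direction. Condensed into plain commands, with names and addresses taken from the trace itself (an illustrative sketch, not the helper's authoritative implementation):

ip netns add cvl_0_0_ns_spdk                                        # namespace that will host the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in on the host side
ping -c 1 10.0.0.2                                                  # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability check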
00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:08:45.307 00:08:45.307 --- 10.0.0.2 ping statistics --- 00:08:45.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.307 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:45.307 00:08:45.307 --- 10.0.0.1 ping statistics --- 00:08:45.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.307 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:45.307 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3747799 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3747799 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3747799 ']' 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.308 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:45.308 [2024-07-25 10:23:48.952031] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:08:45.308 [2024-07-25 10:23:48.952075] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.308 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.568 [2024-07-25 10:23:49.026138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.568 [2024-07-25 10:23:49.095224] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.568 [2024-07-25 10:23:49.095269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.568 [2024-07-25 10:23:49.095279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.568 [2024-07-25 10:23:49.095287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.568 [2024-07-25 10:23:49.095294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.568 [2024-07-25 10:23:49.095345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.568 [2024-07-25 10:23:49.095441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.568 [2024-07-25 10:23:49.095443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.136 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.136 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:46.136 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.136 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.136 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.136 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.136 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.396 [2024-07-25 10:23:49.944084] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.396 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.655 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:46.655 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.655 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:46.655 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:46.914 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:47.174 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=37709ad2-c757-486c-8fbd-2fd3e8649838 
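Editor's note: at this point the test has created two malloc bdevs (size 64, block size 512, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above), striped them into raid0, and created the logical volume store 'lvs' (UUID 37709ad2-c757-486c-8fbd-2fd3e8649838) on top of it. The remainder of the trace carves a lvol out of that store, exports it over NVMe/TCP on 10.0.0.2:4420, runs spdk_nvme_perf against it while the lvol is snapshotted, resized, cloned and inflated, and finally tears everything down. Condensed into the rpc.py calls visible in the trace (full script paths elided; a readers' sketch of the sequence, not the test script itself):

rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
rpc.py bdev_malloc_create 64 512                                    # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
rpc.py bdev_lvol_create_lvstore raid0 lvs                           # -> 37709ad2-c757-486c-8fbd-2fd3e8649838
rpc.py bdev_lvol_create -u 37709ad2-c757-486c-8fbd-2fd3e8649838 lvol 20   # -> 23c4457c-8ef1-4d52-9d3a-2d75f7aca949
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23c4457c-8ef1-4d52-9d3a-2d75f7aca949
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# spdk_nvme_perf (randwrite, qd 128) runs against the exported namespace while the lvol is manipulated:
rpc.py bdev_lvol_snapshot 23c4457c-8ef1-4d52-9d3a-2d75f7aca949 MY_SNAPSHOT   # -> 671ddd77-312b-46b8-a757-e1a4a7d2447a
rpc.py bdev_lvol_resize 23c4457c-8ef1-4d52-9d3a-2d75f7aca949 30
rpc.py bdev_lvol_clone 671ddd77-312b-46b8-a757-e1a4a7d2447a MY_CLONE         # -> 25b272f5-4e01-486c-ac21-1719220c1abb
rpc.py bdev_lvol_inflate 25b272f5-4e01-486c-ac21-1719220c1abb
# teardown once perf completes:
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete 23c4457c-8ef1-4d52-9d3a-2d75f7aca949
rpc.py bdev_lvol_delete_lvstore -u 37709ad2-c757-486c-8fbd-2fd3e8649838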
00:08:47.174 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 37709ad2-c757-486c-8fbd-2fd3e8649838 lvol 20 00:08:47.433 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=23c4457c-8ef1-4d52-9d3a-2d75f7aca949 00:08:47.433 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:47.433 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23c4457c-8ef1-4d52-9d3a-2d75f7aca949 00:08:47.692 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:47.951 [2024-07-25 10:23:51.442585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.951 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.951 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3748223 00:08:47.951 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:47.951 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:48.210 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.147 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 23c4457c-8ef1-4d52-9d3a-2d75f7aca949 MY_SNAPSHOT 00:08:49.407 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=671ddd77-312b-46b8-a757-e1a4a7d2447a 00:08:49.407 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 23c4457c-8ef1-4d52-9d3a-2d75f7aca949 30 00:08:49.407 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 671ddd77-312b-46b8-a757-e1a4a7d2447a MY_CLONE 00:08:49.666 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=25b272f5-4e01-486c-ac21-1719220c1abb 00:08:49.666 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 25b272f5-4e01-486c-ac21-1719220c1abb 00:08:50.234 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3748223 00:08:58.357 Initializing NVMe Controllers 00:08:58.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:58.357 Controller IO queue size 128, less than required. 00:08:58.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:58.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:58.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:58.357 Initialization complete. Launching workers. 00:08:58.357 ======================================================== 00:08:58.357 Latency(us) 00:08:58.357 Device Information : IOPS MiB/s Average min max 00:08:58.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12854.80 50.21 9961.30 1636.00 56006.75 00:08:58.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12684.60 49.55 10095.07 3643.45 54210.19 00:08:58.357 ======================================================== 00:08:58.357 Total : 25539.39 99.76 10027.74 1636.00 56006.75 00:08:58.357 00:08:58.357 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.615 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 23c4457c-8ef1-4d52-9d3a-2d75f7aca949 00:08:58.874 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37709ad2-c757-486c-8fbd-2fd3e8649838 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.134 rmmod nvme_tcp 00:08:59.134 rmmod nvme_fabrics 00:08:59.134 rmmod nvme_keyring 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3747799 ']' 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3747799 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3747799 ']' 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3747799 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3747799 00:08:59.134 10:24:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3747799' 00:08:59.134 killing process with pid 3747799 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3747799 00:08:59.134 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3747799 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.395 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.932 00:09:01.932 real 0m23.079s 00:09:01.932 user 1m2.542s 00:09:01.932 sys 0m9.897s 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 ************************************ 00:09:01.932 END TEST nvmf_lvol 00:09:01.932 ************************************ 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 ************************************ 00:09:01.932 START TEST nvmf_lvs_grow 00:09:01.932 ************************************ 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:01.932 * Looking for test storage... 
00:09:01.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.932 10:24:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:01.932 10:24:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.932 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:08.506 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:08.506 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.506 
10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:08.506 Found net devices under 0000:af:00.0: cvl_0_0 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:08.506 Found net devices under 0000:af:00.1: cvl_0_1 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.506 10:24:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.506 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:09:08.507 00:09:08.507 --- 10.0.0.2 ping statistics --- 00:09:08.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.507 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:08.507 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:09:08.766 00:09:08.766 --- 10.0.0.1 ping statistics --- 00:09:08.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.766 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3754473 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3754473 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3754473 ']' 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.766 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.766 [2024-07-25 10:24:12.299512] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:08.766 [2024-07-25 10:24:12.299557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.766 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.766 [2024-07-25 10:24:12.374418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.766 [2024-07-25 10:24:12.443274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.766 [2024-07-25 10:24:12.443318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.766 [2024-07-25 10:24:12.443328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.766 [2024-07-25 10:24:12.443336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.766 [2024-07-25 10:24:12.443343] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.766 [2024-07-25 10:24:12.443369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.704 [2024-07-25 10:24:13.298394] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.704 ************************************ 00:09:09.704 START TEST lvs_grow_clean 00:09:09.704 ************************************ 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.704 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.963 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.963 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.221 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:10.221 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:10.221 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.221 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.221 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.221 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 lvol 150 00:09:10.480 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b1635a37-46b1-4c10-a623-054964a672c8 00:09:10.480 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.480 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.739 [2024-07-25 10:24:14.243898] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.739 [2024-07-25 10:24:14.243952] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.739 true 00:09:10.739 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:10.739 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.739 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.739 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.998 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b1635a37-46b1-4c10-a623-054964a672c8 00:09:11.257 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.257 [2024-07-25 10:24:14.889857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.257 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3754900 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3754900 /var/tmp/bdevperf.sock 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3754900 ']' 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.516 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 [2024-07-25 10:24:15.104818] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:11.516 [2024-07-25 10:24:15.104869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754900 ] 00:09:11.516 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.516 [2024-07-25 10:24:15.174359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.774 [2024-07-25 10:24:15.248294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.341 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.341 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:12.341 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.599 Nvme0n1 00:09:12.599 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.858 [ 00:09:12.858 { 00:09:12.858 "name": "Nvme0n1", 00:09:12.858 "aliases": [ 00:09:12.858 "b1635a37-46b1-4c10-a623-054964a672c8" 00:09:12.858 ], 00:09:12.858 "product_name": "NVMe disk", 00:09:12.858 "block_size": 4096, 00:09:12.858 "num_blocks": 38912, 00:09:12.858 "uuid": "b1635a37-46b1-4c10-a623-054964a672c8", 00:09:12.858 "assigned_rate_limits": { 00:09:12.858 "rw_ios_per_sec": 0, 00:09:12.858 "rw_mbytes_per_sec": 0, 00:09:12.858 "r_mbytes_per_sec": 0, 00:09:12.858 "w_mbytes_per_sec": 0 00:09:12.858 }, 00:09:12.858 "claimed": false, 00:09:12.858 "zoned": false, 00:09:12.858 "supported_io_types": { 00:09:12.858 "read": true, 00:09:12.858 "write": true, 00:09:12.858 "unmap": true, 00:09:12.858 "flush": true, 00:09:12.858 "reset": true, 00:09:12.858 "nvme_admin": true, 00:09:12.858 "nvme_io": true, 00:09:12.858 "nvme_io_md": false, 00:09:12.858 "write_zeroes": true, 00:09:12.858 "zcopy": false, 00:09:12.858 "get_zone_info": false, 00:09:12.858 "zone_management": false, 00:09:12.858 "zone_append": false, 00:09:12.858 "compare": true, 00:09:12.858 "compare_and_write": true, 00:09:12.858 "abort": true, 00:09:12.858 "seek_hole": false, 00:09:12.858 "seek_data": false, 00:09:12.858 "copy": true, 00:09:12.858 "nvme_iov_md": false 00:09:12.858 }, 00:09:12.858 "memory_domains": [ 00:09:12.858 { 00:09:12.858 "dma_device_id": "system", 00:09:12.858 "dma_device_type": 1 00:09:12.858 } 00:09:12.858 ], 00:09:12.858 "driver_specific": { 00:09:12.858 "nvme": [ 00:09:12.858 { 00:09:12.858 "trid": { 00:09:12.858 "trtype": "TCP", 00:09:12.858 "adrfam": "IPv4", 00:09:12.858 "traddr": "10.0.0.2", 00:09:12.858 "trsvcid": "4420", 00:09:12.858 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.858 }, 00:09:12.858 "ctrlr_data": { 00:09:12.858 "cntlid": 1, 00:09:12.858 "vendor_id": "0x8086", 00:09:12.858 "model_number": "SPDK bdev Controller", 00:09:12.858 "serial_number": "SPDK0", 00:09:12.858 "firmware_revision": "24.09", 00:09:12.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.858 "oacs": { 00:09:12.858 "security": 0, 00:09:12.859 "format": 0, 00:09:12.859 "firmware": 0, 00:09:12.859 "ns_manage": 0 00:09:12.859 }, 00:09:12.859 
"multi_ctrlr": true, 00:09:12.859 "ana_reporting": false 00:09:12.859 }, 00:09:12.859 "vs": { 00:09:12.859 "nvme_version": "1.3" 00:09:12.859 }, 00:09:12.859 "ns_data": { 00:09:12.859 "id": 1, 00:09:12.859 "can_share": true 00:09:12.859 } 00:09:12.859 } 00:09:12.859 ], 00:09:12.859 "mp_policy": "active_passive" 00:09:12.859 } 00:09:12.859 } 00:09:12.859 ] 00:09:12.859 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3755143 00:09:12.859 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.859 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.859 Running I/O for 10 seconds... 00:09:13.801 Latency(us) 00:09:13.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.801 Nvme0n1 : 1.00 23737.00 92.72 0.00 0.00 0.00 0.00 0.00 00:09:13.801 =================================================================================================================== 00:09:13.801 Total : 23737.00 92.72 0.00 0.00 0.00 0.00 0.00 00:09:13.801 00:09:14.739 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:14.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.739 Nvme0n1 : 2.00 23933.00 93.49 0.00 0.00 0.00 0.00 0.00 00:09:14.739 =================================================================================================================== 00:09:14.739 Total : 23933.00 93.49 0.00 0.00 0.00 0.00 0.00 00:09:14.739 00:09:15.035 true 00:09:15.035 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:15.035 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:15.035 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.035 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.035 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3755143 00:09:15.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.972 Nvme0n1 : 3.00 23976.67 93.66 0.00 0.00 0.00 0.00 0.00 00:09:15.972 =================================================================================================================== 00:09:15.972 Total : 23976.67 93.66 0.00 0.00 0.00 0.00 0.00 00:09:15.972 00:09:16.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.909 Nvme0n1 : 4.00 24030.50 93.87 0.00 0.00 0.00 0.00 0.00 00:09:16.909 =================================================================================================================== 00:09:16.909 Total : 24030.50 93.87 0.00 0.00 0.00 0.00 0.00 00:09:16.909 00:09:17.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:17.846 Nvme0n1 : 5.00 24088.40 94.10 0.00 0.00 0.00 0.00 0.00 00:09:17.846 =================================================================================================================== 00:09:17.846 Total : 24088.40 94.10 0.00 0.00 0.00 0.00 0.00 00:09:17.846 00:09:18.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.784 Nvme0n1 : 6.00 24129.83 94.26 0.00 0.00 0.00 0.00 0.00 00:09:18.784 =================================================================================================================== 00:09:18.784 Total : 24129.83 94.26 0.00 0.00 0.00 0.00 0.00 00:09:18.784 00:09:20.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.164 Nvme0n1 : 7.00 24161.86 94.38 0.00 0.00 0.00 0.00 0.00 00:09:20.164 =================================================================================================================== 00:09:20.164 Total : 24161.86 94.38 0.00 0.00 0.00 0.00 0.00 00:09:20.164 00:09:21.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.101 Nvme0n1 : 8.00 24191.25 94.50 0.00 0.00 0.00 0.00 0.00 00:09:21.101 =================================================================================================================== 00:09:21.101 Total : 24191.25 94.50 0.00 0.00 0.00 0.00 0.00 00:09:21.101 00:09:22.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.038 Nvme0n1 : 9.00 24179.00 94.45 0.00 0.00 0.00 0.00 0.00 00:09:22.038 =================================================================================================================== 00:09:22.038 Total : 24179.00 94.45 0.00 0.00 0.00 0.00 0.00 00:09:22.038 00:09:22.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.977 Nvme0n1 : 10.00 24204.20 94.55 0.00 0.00 0.00 0.00 0.00 00:09:22.977 =================================================================================================================== 00:09:22.977 Total : 24204.20 94.55 0.00 0.00 0.00 0.00 0.00 00:09:22.977 00:09:22.977 00:09:22.977 Latency(us) 00:09:22.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.977 Nvme0n1 : 10.01 24204.69 94.55 0.00 0.00 5284.73 3316.12 14155.78 00:09:22.977 =================================================================================================================== 00:09:22.977 Total : 24204.69 94.55 0.00 0.00 5284.73 3316.12 14155.78 00:09:22.977 0 00:09:22.977 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3754900 00:09:22.977 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3754900 ']' 00:09:22.977 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3754900 00:09:22.977 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:22.977 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.978 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3754900 00:09:22.978 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.978 
10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.978 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3754900' 00:09:22.978 killing process with pid 3754900 00:09:22.978 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3754900 00:09:22.978 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.978 00:09:22.978 Latency(us) 00:09:22.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.978 =================================================================================================================== 00:09:22.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.978 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3754900 00:09:23.237 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.237 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.496 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:23.496 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.757 [2024-07-25 10:24:27.386276] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.757 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:24.016 request: 00:09:24.016 { 00:09:24.016 "uuid": "b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3", 00:09:24.016 "method": "bdev_lvol_get_lvstores", 00:09:24.016 "req_id": 1 00:09:24.016 } 00:09:24.016 Got JSON-RPC error response 00:09:24.016 response: 00:09:24.016 { 00:09:24.016 "code": -19, 00:09:24.016 "message": "No such device" 00:09:24.016 } 00:09:24.016 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:24.016 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.016 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:24.016 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.016 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.276 aio_bdev 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b1635a37-46b1-4c10-a623-054964a672c8 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=b1635a37-46b1-4c10-a623-054964a672c8 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.276 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b b1635a37-46b1-4c10-a623-054964a672c8 -t 2000 00:09:24.535 [ 00:09:24.535 { 00:09:24.535 "name": "b1635a37-46b1-4c10-a623-054964a672c8", 00:09:24.535 "aliases": [ 00:09:24.535 "lvs/lvol" 00:09:24.535 ], 00:09:24.535 "product_name": "Logical Volume", 00:09:24.535 "block_size": 4096, 00:09:24.535 "num_blocks": 38912, 00:09:24.535 "uuid": "b1635a37-46b1-4c10-a623-054964a672c8", 00:09:24.535 "assigned_rate_limits": { 00:09:24.535 "rw_ios_per_sec": 0, 00:09:24.535 "rw_mbytes_per_sec": 0, 00:09:24.535 "r_mbytes_per_sec": 0, 00:09:24.535 "w_mbytes_per_sec": 0 00:09:24.535 }, 00:09:24.535 "claimed": false, 00:09:24.535 "zoned": false, 00:09:24.535 "supported_io_types": { 00:09:24.535 "read": true, 00:09:24.535 "write": true, 00:09:24.535 "unmap": true, 00:09:24.535 "flush": false, 00:09:24.535 "reset": true, 00:09:24.535 "nvme_admin": false, 00:09:24.535 "nvme_io": false, 00:09:24.535 "nvme_io_md": false, 00:09:24.535 "write_zeroes": true, 00:09:24.535 "zcopy": false, 00:09:24.535 "get_zone_info": false, 00:09:24.535 "zone_management": false, 00:09:24.535 "zone_append": false, 00:09:24.535 "compare": false, 00:09:24.535 "compare_and_write": false, 00:09:24.535 "abort": false, 00:09:24.535 "seek_hole": true, 00:09:24.535 "seek_data": true, 00:09:24.535 "copy": false, 00:09:24.535 "nvme_iov_md": false 00:09:24.535 }, 00:09:24.535 "driver_specific": { 00:09:24.535 "lvol": { 00:09:24.535 "lvol_store_uuid": "b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3", 00:09:24.535 "base_bdev": "aio_bdev", 00:09:24.535 "thin_provision": false, 00:09:24.535 "num_allocated_clusters": 38, 00:09:24.535 "snapshot": false, 00:09:24.535 "clone": false, 00:09:24.535 "esnap_clone": false 00:09:24.535 } 00:09:24.535 } 00:09:24.535 } 00:09:24.535 ] 00:09:24.535 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:24.535 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:24.535 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:24.795 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:24.795 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:24.795 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:24.795 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:24.795 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b1635a37-46b1-4c10-a623-054964a672c8 00:09:25.054 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5cf0c9f-9500-4d1f-a42d-c2ca40c417d3 00:09:25.314 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.314 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.314 00:09:25.314 real 0m15.615s 00:09:25.314 user 0m14.654s 00:09:25.314 sys 0m2.010s 00:09:25.314 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.314 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:25.314 ************************************ 00:09:25.314 END TEST lvs_grow_clean 00:09:25.314 ************************************ 00:09:25.314 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:25.314 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.314 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.314 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.573 ************************************ 00:09:25.573 START TEST lvs_grow_dirty 00:09:25.573 ************************************ 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.573 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.832 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:25.833 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:25.833 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:25.833 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:25.833 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.092 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.092 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.092 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb7e5c86-11c4-47ab-985a-81d4f776f32c lvol 150 00:09:26.092 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f4bbea06-c0a8-4d00-a011-43822f5b5e98 00:09:26.092 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.092 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:26.352 [2024-07-25 10:24:29.943150] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:26.352 [2024-07-25 10:24:29.943199] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:26.352 true 00:09:26.352 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:26.352 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:26.611 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:26.611 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:26.611 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f4bbea06-c0a8-4d00-a011-43822f5b5e98 00:09:26.869 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:27.128 [2024-07-25 10:24:30.645241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
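The trace above has finished the setup half of the grow test: a 200M file exposed as aio_bdev, an lvol store carved out of it, a 150M lvol, and an NVMe-oF subsystem (nqn.2016-06.io.spdk:cnode0) serving that lvol over TCP on 10.0.0.2:4420. Condensed from the rpc.py calls visible in the trace, the grow sequence looks roughly like the sketch below; $rpc, the backing-file path, and the captured UUIDs stand in for the workspace-specific values in the log, so read it as an outline of the test flow rather than the literal test script.

  # sketch of the lvs_grow flow reconstructed from the trace ($rpc = scripts/rpc.py; paths/UUIDs are placeholders)
  rpc=./scripts/rpc.py                      # assumption: run from the SPDK tree
  aio=./test/nvmf/target/aio_bdev           # backing file for the AIO bdev

  rm -f "$aio" && truncate -s 200M "$aio"                          # 200 MiB backing file to start
  $rpc bdev_aio_create "$aio" aio_bdev 4096                        # expose it as bdev "aio_bdev"
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # 49 data clusters at this size
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                 # 150 MiB lvol on the store

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  truncate -s 400M "$aio"                                          # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                                    # ...and rescan so the bdev sees the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                            # lvstore grows from 49 to 99 clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

What follows in the log is the I/O half of the same test: a bdevperf process is started against /var/tmp/bdevperf.sock, attaches to the subsystem as Nvme0, and drives a 10-second randwrite workload while the store is grown underneath it.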
00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3757701 00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3757701 /var/tmp/bdevperf.sock 00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3757701 ']' 00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.128 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:27.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:27.129 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.129 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.129 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:27.388 [2024-07-25 10:24:30.867449] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:27.388 [2024-07-25 10:24:30.867505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757701 ] 00:09:27.388 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.388 [2024-07-25 10:24:30.938877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.388 [2024-07-25 10:24:31.012737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.347 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.347 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:28.347 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:28.347 Nvme0n1 00:09:28.347 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:28.606 [ 00:09:28.606 { 00:09:28.606 "name": "Nvme0n1", 00:09:28.606 "aliases": [ 00:09:28.606 "f4bbea06-c0a8-4d00-a011-43822f5b5e98" 00:09:28.606 ], 00:09:28.606 "product_name": "NVMe disk", 00:09:28.606 "block_size": 4096, 00:09:28.606 "num_blocks": 38912, 00:09:28.606 "uuid": "f4bbea06-c0a8-4d00-a011-43822f5b5e98", 00:09:28.606 "assigned_rate_limits": { 00:09:28.606 "rw_ios_per_sec": 0, 00:09:28.606 "rw_mbytes_per_sec": 0, 00:09:28.606 "r_mbytes_per_sec": 0, 00:09:28.606 "w_mbytes_per_sec": 0 00:09:28.606 }, 00:09:28.606 "claimed": false, 00:09:28.606 "zoned": false, 00:09:28.606 "supported_io_types": { 00:09:28.606 "read": true, 00:09:28.606 "write": true, 00:09:28.606 "unmap": true, 00:09:28.606 "flush": true, 00:09:28.606 "reset": true, 00:09:28.606 "nvme_admin": true, 00:09:28.606 "nvme_io": true, 00:09:28.606 "nvme_io_md": false, 00:09:28.606 "write_zeroes": true, 00:09:28.606 "zcopy": false, 00:09:28.606 "get_zone_info": false, 00:09:28.606 "zone_management": false, 00:09:28.606 "zone_append": false, 00:09:28.606 "compare": true, 00:09:28.606 "compare_and_write": true, 00:09:28.606 "abort": true, 00:09:28.606 "seek_hole": false, 00:09:28.606 "seek_data": false, 00:09:28.606 "copy": true, 00:09:28.606 "nvme_iov_md": false 00:09:28.606 }, 00:09:28.606 "memory_domains": [ 00:09:28.606 { 00:09:28.606 "dma_device_id": "system", 00:09:28.606 "dma_device_type": 1 00:09:28.606 } 00:09:28.606 ], 00:09:28.606 "driver_specific": { 00:09:28.606 "nvme": [ 00:09:28.606 { 00:09:28.606 "trid": { 00:09:28.606 "trtype": "TCP", 00:09:28.606 "adrfam": "IPv4", 00:09:28.606 "traddr": "10.0.0.2", 00:09:28.606 "trsvcid": "4420", 00:09:28.606 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:28.606 }, 00:09:28.606 "ctrlr_data": { 00:09:28.606 "cntlid": 1, 00:09:28.606 "vendor_id": "0x8086", 00:09:28.606 "model_number": "SPDK bdev Controller", 00:09:28.606 "serial_number": "SPDK0", 00:09:28.606 "firmware_revision": "24.09", 00:09:28.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.606 "oacs": { 00:09:28.606 "security": 0, 00:09:28.606 "format": 0, 00:09:28.606 "firmware": 0, 00:09:28.606 "ns_manage": 0 00:09:28.606 }, 00:09:28.606 
"multi_ctrlr": true, 00:09:28.606 "ana_reporting": false 00:09:28.606 }, 00:09:28.606 "vs": { 00:09:28.606 "nvme_version": "1.3" 00:09:28.606 }, 00:09:28.606 "ns_data": { 00:09:28.606 "id": 1, 00:09:28.606 "can_share": true 00:09:28.606 } 00:09:28.606 } 00:09:28.606 ], 00:09:28.606 "mp_policy": "active_passive" 00:09:28.606 } 00:09:28.606 } 00:09:28.606 ] 00:09:28.606 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3757859 00:09:28.606 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:28.606 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:28.606 Running I/O for 10 seconds... 00:09:29.544 Latency(us) 00:09:29.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.544 Nvme0n1 : 1.00 23102.00 90.24 0.00 0.00 0.00 0.00 0.00 00:09:29.544 =================================================================================================================== 00:09:29.544 Total : 23102.00 90.24 0.00 0.00 0.00 0.00 0.00 00:09:29.544 00:09:30.484 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:30.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.743 Nvme0n1 : 2.00 23259.00 90.86 0.00 0.00 0.00 0.00 0.00 00:09:30.743 =================================================================================================================== 00:09:30.743 Total : 23259.00 90.86 0.00 0.00 0.00 0.00 0.00 00:09:30.743 00:09:30.743 true 00:09:30.743 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:30.743 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:31.003 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:31.003 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:31.003 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3757859 00:09:31.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.571 Nvme0n1 : 3.00 23332.67 91.14 0.00 0.00 0.00 0.00 0.00 00:09:31.571 =================================================================================================================== 00:09:31.571 Total : 23332.67 91.14 0.00 0.00 0.00 0.00 0.00 00:09:31.571 00:09:32.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.508 Nvme0n1 : 4.00 23393.50 91.38 0.00 0.00 0.00 0.00 0.00 00:09:32.508 =================================================================================================================== 00:09:32.508 Total : 23393.50 91.38 0.00 0.00 0.00 0.00 0.00 00:09:32.508 00:09:33.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:33.887 Nvme0n1 : 5.00 23441.20 91.57 0.00 0.00 0.00 0.00 0.00 00:09:33.887 =================================================================================================================== 00:09:33.887 Total : 23441.20 91.57 0.00 0.00 0.00 0.00 0.00 00:09:33.887 00:09:34.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.825 Nvme0n1 : 6.00 23485.00 91.74 0.00 0.00 0.00 0.00 0.00 00:09:34.825 =================================================================================================================== 00:09:34.825 Total : 23485.00 91.74 0.00 0.00 0.00 0.00 0.00 00:09:34.825 00:09:35.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.762 Nvme0n1 : 7.00 23514.00 91.85 0.00 0.00 0.00 0.00 0.00 00:09:35.762 =================================================================================================================== 00:09:35.762 Total : 23514.00 91.85 0.00 0.00 0.00 0.00 0.00 00:09:35.762 00:09:36.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.700 Nvme0n1 : 8.00 23528.75 91.91 0.00 0.00 0.00 0.00 0.00 00:09:36.700 =================================================================================================================== 00:09:36.700 Total : 23528.75 91.91 0.00 0.00 0.00 0.00 0.00 00:09:36.700 00:09:37.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.638 Nvme0n1 : 9.00 23543.78 91.97 0.00 0.00 0.00 0.00 0.00 00:09:37.638 =================================================================================================================== 00:09:37.638 Total : 23543.78 91.97 0.00 0.00 0.00 0.00 0.00 00:09:37.638 00:09:38.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.576 Nvme0n1 : 10.00 23568.60 92.06 0.00 0.00 0.00 0.00 0.00 00:09:38.576 =================================================================================================================== 00:09:38.576 Total : 23568.60 92.06 0.00 0.00 0.00 0.00 0.00 00:09:38.576 00:09:38.576 00:09:38.576 Latency(us) 00:09:38.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.576 Nvme0n1 : 10.01 23568.26 92.06 0.00 0.00 5427.21 4089.45 15518.92 00:09:38.576 =================================================================================================================== 00:09:38.576 Total : 23568.26 92.06 0.00 0.00 5427.21 4089.45 15518.92 00:09:38.576 0 00:09:38.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3757701 00:09:38.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3757701 ']' 00:09:38.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3757701 00:09:38.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:38.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3757701 00:09:38.836 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:38.836 
10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:38.836 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3757701' 00:09:38.836 killing process with pid 3757701 00:09:38.836 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3757701 00:09:38.836 Received shutdown signal, test time was about 10.000000 seconds 00:09:38.836 00:09:38.836 Latency(us) 00:09:38.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.836 =================================================================================================================== 00:09:38.836 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.836 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3757701 00:09:38.836 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:39.095 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:39.355 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:39.355 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:39.355 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:39.355 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:39.355 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3754473 00:09:39.355 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3754473 00:09:39.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3754473 Killed "${NVMF_APP[@]}" "$@" 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3759712 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3759712 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3759712 ']' 00:09:39.355 10:24:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.355 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:39.615 [2024-07-25 10:24:43.085220] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:39.615 [2024-07-25 10:24:43.085274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.615 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.615 [2024-07-25 10:24:43.160842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.615 [2024-07-25 10:24:43.232937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.615 [2024-07-25 10:24:43.232978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.615 [2024-07-25 10:24:43.232987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.615 [2024-07-25 10:24:43.232996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.615 [2024-07-25 10:24:43.233007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:39.615 [2024-07-25 10:24:43.233029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.184 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.184 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:40.184 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.184 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.184 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.443 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.443 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.443 [2024-07-25 10:24:44.066043] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:40.443 [2024-07-25 10:24:44.066123] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:40.443 [2024-07-25 10:24:44.066147] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:40.443 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:40.443 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f4bbea06-c0a8-4d00-a011-43822f5b5e98 00:09:40.443 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f4bbea06-c0a8-4d00-a011-43822f5b5e98 00:09:40.443 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.443 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:40.443 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.443 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.444 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:40.703 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4bbea06-c0a8-4d00-a011-43822f5b5e98 -t 2000 00:09:40.962 [ 00:09:40.962 { 00:09:40.962 "name": "f4bbea06-c0a8-4d00-a011-43822f5b5e98", 00:09:40.962 "aliases": [ 00:09:40.962 "lvs/lvol" 00:09:40.962 ], 00:09:40.962 "product_name": "Logical Volume", 00:09:40.962 "block_size": 4096, 00:09:40.962 "num_blocks": 38912, 00:09:40.962 "uuid": "f4bbea06-c0a8-4d00-a011-43822f5b5e98", 00:09:40.962 "assigned_rate_limits": { 00:09:40.962 "rw_ios_per_sec": 0, 00:09:40.962 "rw_mbytes_per_sec": 0, 00:09:40.962 "r_mbytes_per_sec": 0, 00:09:40.962 "w_mbytes_per_sec": 0 00:09:40.962 }, 00:09:40.962 "claimed": false, 00:09:40.962 "zoned": false, 
00:09:40.962 "supported_io_types": { 00:09:40.962 "read": true, 00:09:40.962 "write": true, 00:09:40.962 "unmap": true, 00:09:40.962 "flush": false, 00:09:40.962 "reset": true, 00:09:40.962 "nvme_admin": false, 00:09:40.962 "nvme_io": false, 00:09:40.962 "nvme_io_md": false, 00:09:40.962 "write_zeroes": true, 00:09:40.962 "zcopy": false, 00:09:40.962 "get_zone_info": false, 00:09:40.962 "zone_management": false, 00:09:40.962 "zone_append": false, 00:09:40.962 "compare": false, 00:09:40.962 "compare_and_write": false, 00:09:40.962 "abort": false, 00:09:40.962 "seek_hole": true, 00:09:40.962 "seek_data": true, 00:09:40.962 "copy": false, 00:09:40.962 "nvme_iov_md": false 00:09:40.962 }, 00:09:40.962 "driver_specific": { 00:09:40.962 "lvol": { 00:09:40.962 "lvol_store_uuid": "bb7e5c86-11c4-47ab-985a-81d4f776f32c", 00:09:40.962 "base_bdev": "aio_bdev", 00:09:40.962 "thin_provision": false, 00:09:40.962 "num_allocated_clusters": 38, 00:09:40.962 "snapshot": false, 00:09:40.962 "clone": false, 00:09:40.962 "esnap_clone": false 00:09:40.962 } 00:09:40.962 } 00:09:40.962 } 00:09:40.962 ] 00:09:40.963 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:40.963 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:40.963 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:40.963 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:40.963 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:40.963 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:41.222 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:41.222 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.222 [2024-07-25 10:24:44.898342] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:41.516 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:41.516 request: 00:09:41.516 { 00:09:41.516 "uuid": "bb7e5c86-11c4-47ab-985a-81d4f776f32c", 00:09:41.516 "method": "bdev_lvol_get_lvstores", 00:09:41.516 "req_id": 1 00:09:41.516 } 00:09:41.516 Got JSON-RPC error response 00:09:41.516 response: 00:09:41.516 { 00:09:41.516 "code": -19, 00:09:41.516 "message": "No such device" 00:09:41.516 } 00:09:41.516 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:41.516 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:41.516 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:41.516 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:41.516 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.775 aio_bdev 00:09:41.775 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f4bbea06-c0a8-4d00-a011-43822f5b5e98 00:09:41.775 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f4bbea06-c0a8-4d00-a011-43822f5b5e98 00:09:41.775 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.775 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:41.775 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.775 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.775 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.775 10:24:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4bbea06-c0a8-4d00-a011-43822f5b5e98 -t 2000 00:09:42.035 [ 00:09:42.035 { 00:09:42.035 "name": "f4bbea06-c0a8-4d00-a011-43822f5b5e98", 00:09:42.035 "aliases": [ 00:09:42.035 "lvs/lvol" 00:09:42.035 ], 00:09:42.035 "product_name": "Logical Volume", 00:09:42.035 "block_size": 4096, 00:09:42.035 "num_blocks": 38912, 00:09:42.035 "uuid": "f4bbea06-c0a8-4d00-a011-43822f5b5e98", 00:09:42.035 "assigned_rate_limits": { 00:09:42.035 "rw_ios_per_sec": 0, 00:09:42.035 "rw_mbytes_per_sec": 0, 00:09:42.035 "r_mbytes_per_sec": 0, 00:09:42.035 "w_mbytes_per_sec": 0 00:09:42.035 }, 00:09:42.035 "claimed": false, 00:09:42.035 "zoned": false, 00:09:42.035 "supported_io_types": { 00:09:42.035 "read": true, 00:09:42.035 "write": true, 00:09:42.035 "unmap": true, 00:09:42.035 "flush": false, 00:09:42.035 "reset": true, 00:09:42.035 "nvme_admin": false, 00:09:42.035 "nvme_io": false, 00:09:42.035 "nvme_io_md": false, 00:09:42.035 "write_zeroes": true, 00:09:42.035 "zcopy": false, 00:09:42.035 "get_zone_info": false, 00:09:42.035 "zone_management": false, 00:09:42.035 "zone_append": false, 00:09:42.035 "compare": false, 00:09:42.035 "compare_and_write": false, 00:09:42.035 "abort": false, 00:09:42.035 "seek_hole": true, 00:09:42.035 "seek_data": true, 00:09:42.035 "copy": false, 00:09:42.035 "nvme_iov_md": false 00:09:42.035 }, 00:09:42.035 "driver_specific": { 00:09:42.035 "lvol": { 00:09:42.035 "lvol_store_uuid": "bb7e5c86-11c4-47ab-985a-81d4f776f32c", 00:09:42.035 "base_bdev": "aio_bdev", 00:09:42.035 "thin_provision": false, 00:09:42.035 "num_allocated_clusters": 38, 00:09:42.035 "snapshot": false, 00:09:42.035 "clone": false, 00:09:42.035 "esnap_clone": false 00:09:42.035 } 00:09:42.035 } 00:09:42.035 } 00:09:42.035 ] 00:09:42.035 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:42.035 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:42.035 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:42.294 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:42.294 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 00:09:42.294 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:42.294 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:42.294 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f4bbea06-c0a8-4d00-a011-43822f5b5e98 00:09:42.553 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb7e5c86-11c4-47ab-985a-81d4f776f32c 
00:09:42.812 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:43.071 00:09:43.071 real 0m17.505s 00:09:43.071 user 0m43.497s 00:09:43.071 sys 0m5.024s 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:43.071 ************************************ 00:09:43.071 END TEST lvs_grow_dirty 00:09:43.071 ************************************ 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:43.071 nvmf_trace.0 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.071 rmmod nvme_tcp 00:09:43.071 rmmod nvme_fabrics 00:09:43.071 rmmod nvme_keyring 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3759712 ']' 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3759712 00:09:43.071 
10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3759712 ']' 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3759712 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3759712 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3759712' 00:09:43.071 killing process with pid 3759712 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3759712 00:09:43.071 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3759712 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.331 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:45.866 00:09:45.866 real 0m43.883s 00:09:45.866 user 1m4.301s 00:09:45.866 sys 0m12.813s 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.866 ************************************ 00:09:45.866 END TEST nvmf_lvs_grow 00:09:45.866 ************************************ 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.866 ************************************ 00:09:45.866 START TEST nvmf_bdev_io_wait 00:09:45.866 ************************************ 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:45.866 * Looking for test storage... 00:09:45.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.866 
10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.866 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:52.436 10:24:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:52.436 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:52.436 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.436 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:52.437 Found net devices under 0000:af:00.0: cvl_0_0 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:52.437 Found net devices under 0000:af:00.1: cvl_0_1 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.437 10:24:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.437 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:09:52.437 00:09:52.437 --- 10.0.0.2 ping statistics --- 00:09:52.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.437 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:52.437 00:09:52.437 --- 10.0.0.1 ping statistics --- 00:09:52.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.437 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.437 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3764238 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3764238 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3764238 ']' 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.697 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.697 [2024-07-25 10:24:56.222465] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:52.697 [2024-07-25 10:24:56.222511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.697 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.697 [2024-07-25 10:24:56.295115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.697 [2024-07-25 10:24:56.367722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.697 [2024-07-25 10:24:56.367765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.697 [2024-07-25 10:24:56.367775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.697 [2024-07-25 10:24:56.367784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.697 [2024-07-25 10:24:56.367791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.697 [2024-07-25 10:24:56.367838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.697 [2024-07-25 10:24:56.367854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.697 [2024-07-25 10:24:56.367940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.697 [2024-07-25 10:24:56.367942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.636 10:24:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 [2024-07-25 10:24:57.141108] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 Malloc0 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.636 [2024-07-25 10:24:57.212917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3764385 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3764388 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.636 { 00:09:53.636 "params": { 00:09:53.636 "name": "Nvme$subsystem", 00:09:53.636 "trtype": "$TEST_TRANSPORT", 00:09:53.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.636 "adrfam": "ipv4", 00:09:53.636 "trsvcid": "$NVMF_PORT", 00:09:53.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.636 "hdgst": ${hdgst:-false}, 00:09:53.636 "ddgst": ${ddgst:-false} 00:09:53.636 }, 00:09:53.636 "method": "bdev_nvme_attach_controller" 00:09:53.636 } 00:09:53.636 EOF 00:09:53.636 )") 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3764391 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.636 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.636 { 00:09:53.636 "params": { 00:09:53.636 "name": "Nvme$subsystem", 00:09:53.636 "trtype": "$TEST_TRANSPORT", 00:09:53.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.636 "adrfam": "ipv4", 00:09:53.636 "trsvcid": "$NVMF_PORT", 00:09:53.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.636 "hdgst": ${hdgst:-false}, 00:09:53.636 "ddgst": ${ddgst:-false} 00:09:53.636 }, 00:09:53.636 "method": "bdev_nvme_attach_controller" 00:09:53.636 } 00:09:53.636 EOF 00:09:53.636 )") 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3764395 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.637 { 00:09:53.637 "params": { 00:09:53.637 "name": "Nvme$subsystem", 00:09:53.637 "trtype": "$TEST_TRANSPORT", 00:09:53.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.637 "adrfam": "ipv4", 00:09:53.637 "trsvcid": "$NVMF_PORT", 00:09:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.637 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.637 "hdgst": ${hdgst:-false}, 00:09:53.637 "ddgst": ${ddgst:-false} 00:09:53.637 }, 00:09:53.637 "method": "bdev_nvme_attach_controller" 00:09:53.637 } 00:09:53.637 EOF 00:09:53.637 )") 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.637 { 00:09:53.637 "params": { 00:09:53.637 "name": "Nvme$subsystem", 00:09:53.637 "trtype": "$TEST_TRANSPORT", 00:09:53.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.637 "adrfam": "ipv4", 00:09:53.637 "trsvcid": "$NVMF_PORT", 00:09:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.637 "hdgst": ${hdgst:-false}, 00:09:53.637 "ddgst": ${ddgst:-false} 00:09:53.637 }, 00:09:53.637 "method": "bdev_nvme_attach_controller" 00:09:53.637 } 00:09:53.637 EOF 00:09:53.637 )") 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3764385 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.637 "params": { 00:09:53.637 "name": "Nvme1", 00:09:53.637 "trtype": "tcp", 00:09:53.637 "traddr": "10.0.0.2", 00:09:53.637 "adrfam": "ipv4", 00:09:53.637 "trsvcid": "4420", 00:09:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.637 "hdgst": false, 00:09:53.637 "ddgst": false 00:09:53.637 }, 00:09:53.637 "method": "bdev_nvme_attach_controller" 00:09:53.637 }' 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.637 "params": { 00:09:53.637 "name": "Nvme1", 00:09:53.637 "trtype": "tcp", 00:09:53.637 "traddr": "10.0.0.2", 00:09:53.637 "adrfam": "ipv4", 00:09:53.637 "trsvcid": "4420", 00:09:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.637 "hdgst": false, 00:09:53.637 "ddgst": false 00:09:53.637 }, 00:09:53.637 "method": "bdev_nvme_attach_controller" 00:09:53.637 }' 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.637 "params": { 00:09:53.637 "name": "Nvme1", 00:09:53.637 "trtype": "tcp", 00:09:53.637 "traddr": "10.0.0.2", 00:09:53.637 "adrfam": "ipv4", 00:09:53.637 "trsvcid": "4420", 00:09:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.637 "hdgst": false, 00:09:53.637 "ddgst": false 00:09:53.637 }, 00:09:53.637 "method": "bdev_nvme_attach_controller" 00:09:53.637 }' 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.637 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.637 "params": { 00:09:53.637 "name": "Nvme1", 00:09:53.637 "trtype": "tcp", 00:09:53.637 "traddr": "10.0.0.2", 00:09:53.637 "adrfam": "ipv4", 00:09:53.637 "trsvcid": "4420", 00:09:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.637 "hdgst": false, 00:09:53.637 "ddgst": false 00:09:53.637 }, 00:09:53.637 "method": "bdev_nvme_attach_controller" 00:09:53.637 }' 00:09:53.637 [2024-07-25 10:24:57.264027] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:53.637 [2024-07-25 10:24:57.264082] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:53.637 [2024-07-25 10:24:57.266623] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:53.637 [2024-07-25 10:24:57.266672] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:53.637 [2024-07-25 10:24:57.269612] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:09:53.637 [2024-07-25 10:24:57.269659] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:53.637 [2024-07-25 10:24:57.270599] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:09:53.637 [2024-07-25 10:24:57.270643] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:53.637 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.897 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.897 [2024-07-25 10:24:57.449752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.897 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.897 [2024-07-25 10:24:57.523721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.897 [2024-07-25 10:24:57.541482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.897 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.156 [2024-07-25 10:24:57.616694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:54.156 [2024-07-25 10:24:57.642763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.156 [2024-07-25 10:24:57.688125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.156 [2024-07-25 10:24:57.736839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:54.156 [2024-07-25 10:24:57.763863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:54.156 Running I/O for 1 seconds... 00:09:54.156 Running I/O for 1 seconds... 00:09:54.416 Running I/O for 1 seconds... 00:09:54.416 Running I/O for 1 seconds... 00:09:55.354 00:09:55.354 Latency(us) 00:09:55.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.354 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:55.354 Nvme1n1 : 1.00 256308.59 1001.21 0.00 0.00 498.11 203.98 668.47 00:09:55.354 =================================================================================================================== 00:09:55.354 Total : 256308.59 1001.21 0.00 0.00 498.11 203.98 668.47 00:09:55.354 00:09:55.354 Latency(us) 00:09:55.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.354 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:55.354 Nvme1n1 : 1.01 8367.31 32.68 0.00 0.00 15175.68 5059.38 22858.96 00:09:55.354 =================================================================================================================== 00:09:55.354 Total : 8367.31 32.68 0.00 0.00 15175.68 5059.38 22858.96 00:09:55.354 00:09:55.354 Latency(us) 00:09:55.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.354 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:55.354 Nvme1n1 : 1.00 8154.98 31.86 0.00 0.00 15652.84 4902.09 31037.85 00:09:55.354 =================================================================================================================== 00:09:55.354 Total : 8154.98 31.86 0.00 0.00 15652.84 4902.09 31037.85 00:09:55.354 00:09:55.354 Latency(us) 00:09:55.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.354 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:55.354 Nvme1n1 : 1.01 11289.53 44.10 0.00 0.00 11303.13 6003.10 23907.53 00:09:55.354 =================================================================================================================== 00:09:55.354 Total : 11289.53 44.10 0.00 0.00 11303.13 6003.10 23907.53 00:09:55.613 10:24:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3764388 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3764391 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3764395 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.613 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.613 rmmod nvme_tcp 00:09:55.613 rmmod nvme_fabrics 00:09:55.613 rmmod nvme_keyring 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3764238 ']' 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3764238 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3764238 ']' 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3764238 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3764238 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3764238' 00:09:55.873 killing process with pid 3764238 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3764238 00:09:55.873 10:24:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3764238 00:09:55.873 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.874 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.874 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.874 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.874 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.874 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.874 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.874 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.412 00:09:58.412 real 0m12.543s 00:09:58.412 user 0m19.887s 00:09:58.412 sys 0m7.225s 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:58.412 ************************************ 00:09:58.412 END TEST nvmf_bdev_io_wait 00:09:58.412 ************************************ 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.412 ************************************ 00:09:58.412 START TEST nvmf_queue_depth 00:09:58.412 ************************************ 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:58.412 * Looking for test storage... 
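The nvmf_bdev_io_wait run that closes above stands up its TCP target with the RPC sequence echoed in the xtrace (bdev_set_options, framework_start_init, nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). A minimal stand-alone sketch of that sequence follows, assuming scripts/rpc.py is driven directly against the default /var/tmp/spdk.sock instead of the suite's rpc_cmd wrapper:

  # Hedged reproduction of the target-side setup seen in the trace (paths assume an SPDK tree).
  RPC="./scripts/rpc.py"                   # rpc_cmd in the suite wraps this script
  $RPC bdev_set_options -p 5 -c 1          # only honored before subsystem initialization
  $RPC framework_start_init                # assumes nvmf_tgt was started with --wait-for-rpc
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420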
00:09:58.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.412 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.413 10:25:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.413 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:05.105 10:25:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:05.105 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:05.105 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:05.105 10:25:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.105 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:05.106 Found net devices under 0000:af:00.0: cvl_0_0 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:05.106 Found net devices under 0000:af:00.1: cvl_0_1 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:05.106 
10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.106 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:05.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:05.366 00:10:05.366 --- 10.0.0.2 ping statistics --- 00:10:05.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.366 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:05.366 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:10:05.366 00:10:05.366 --- 10.0.0.1 ping statistics --- 00:10:05.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.366 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3768511 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3768511 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3768511 ']' 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.366 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.626 [2024-07-25 10:25:09.103640] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
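The nvmf_tcp_init steps traced just above move one port of the NIC pair into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) exchange NVMe/TCP traffic on the same host. A hedged recap of those commands, using only invocations that appear in the trace; the cvl_0_0/cvl_0_1 interface names are specific to this machine:

  # Sketch of the namespace plumbing from the trace above (run as root).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator-side reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target-side reachability check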
00:10:05.626 [2024-07-25 10:25:09.103683] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.626 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.626 [2024-07-25 10:25:09.177379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.626 [2024-07-25 10:25:09.247893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.626 [2024-07-25 10:25:09.247934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.626 [2024-07-25 10:25:09.247943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.626 [2024-07-25 10:25:09.247951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.626 [2024-07-25 10:25:09.247958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.626 [2024-07-25 10:25:09.247988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.202 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.202 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:06.202 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.202 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.202 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 [2024-07-25 10:25:09.946517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 Malloc0 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 [2024-07-25 10:25:10.012347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3768786 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3768786 /var/tmp/bdevperf.sock 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3768786 ']' 00:10:06.461 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:06.462 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.462 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:06.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:06.462 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.462 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.462 [2024-07-25 10:25:10.063158] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:10:06.462 [2024-07-25 10:25:10.063208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768786 ] 00:10:06.462 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.462 [2024-07-25 10:25:10.137869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.721 [2024-07-25 10:25:10.214872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.289 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.289 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:07.289 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:07.290 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.290 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.290 NVMe0n1 00:10:07.290 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.290 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:07.550 Running I/O for 10 seconds... 00:10:17.533 00:10:17.533 Latency(us) 00:10:17.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.533 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:17.533 Verification LBA range: start 0x0 length 0x4000 00:10:17.533 NVMe0n1 : 10.06 13226.93 51.67 0.00 0.00 77187.60 18559.80 53477.38 00:10:17.533 =================================================================================================================== 00:10:17.533 Total : 13226.93 51.67 0.00 0.00 77187.60 18559.80 53477.38 00:10:17.533 0 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3768786 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3768786 ']' 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3768786 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3768786 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3768786' 00:10:17.533 killing process with pid 3768786 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3768786 00:10:17.533 Received shutdown 
signal, test time was about 10.000000 seconds 00:10:17.533 00:10:17.533 Latency(us) 00:10:17.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.533 =================================================================================================================== 00:10:17.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.533 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3768786 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.793 rmmod nvme_tcp 00:10:17.793 rmmod nvme_fabrics 00:10:17.793 rmmod nvme_keyring 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3768511 ']' 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3768511 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3768511 ']' 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3768511 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.793 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3768511 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3768511' 00:10:18.053 killing process with pid 3768511 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3768511 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3768511 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.053 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:20.586 00:10:20.586 real 0m22.063s 00:10:20.586 user 0m24.913s 00:10:20.586 sys 0m7.427s 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.586 ************************************ 00:10:20.586 END TEST nvmf_queue_depth 00:10:20.586 ************************************ 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.586 ************************************ 00:10:20.586 START TEST nvmf_target_multipath 00:10:20.586 ************************************ 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.586 * Looking for test storage... 
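The nvmf_queue_depth run summarized above pairs the Malloc0-backed subsystem with a host-side bdevperf started in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock at queue depth 1024. A hedged sketch of that host-side half, built only from invocations visible in the trace; the explicit kill at the end stands in for the suite's killprocess helper:

  # Host-side flow of the queue-depth run above (paths assume an SPDK build tree).
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # The suite waits for the RPC socket (waitforlisten) before issuing RPCs.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  kill "$bdevperf_pid"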
00:10:20.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.586 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.587 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
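gather_supported_nvmf_pci_devs, whose trace begins here, builds the e810/x722/mlx arrays from fixed PCI vendor:device IDs and then keeps only ports that expose an "up" net device. A condensed sketch of the classification using only the IDs visible in the trace; the lspci-based scan is an assumption, since the harness reads its own prebuilt pci_bus_cache instead:

# Sketch: classify NVMe-oF-capable NICs by PCI vendor:device ID, mirroring the
# e810/x722/mlx tables traced above. The lspci parsing is illustrative only.
while read -r slot class vendor device _; do
    case "${vendor}:${device}" in
        8086:1592 | 8086:159b) echo "e810 port at $slot" ;;   # Intel E810 (this run: 0000:af:00.0/1)
        8086:37d2)             echo "x722 port at $slot" ;;   # Intel X722
        15b3:a2dc | 15b3:1021 | 15b3:a2d6 | 15b3:101d | \
        15b3:1017 | 15b3:1019 | 15b3:1015 | 15b3:1013)
                               echo "mlx port at $slot" ;;    # Mellanox ConnectX/BlueField
    esac
done < <(lspci -Dnmm | tr -d '"')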
00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:27.188 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:27.188 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:27.188 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:27.189 Found net devices under 0000:af:00.0: cvl_0_0 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.189 10:25:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:27.189 Found net devices under 0000:af:00.1: cvl_0_1 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:27.189 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:10:27.189 00:10:27.189 --- 10.0.0.2 ping statistics --- 00:10:27.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.189 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:10:27.189 00:10:27.189 --- 10.0.0.1 ping statistics --- 00:10:27.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.189 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:27.189 only one NIC for nvmf test 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.189 rmmod nvme_tcp 00:10:27.189 rmmod nvme_fabrics 00:10:27.189 rmmod nvme_keyring 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.189 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.190 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.095 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.096 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.359 00:10:29.359 real 0m8.941s 
00:10:29.359 user 0m1.795s 00:10:29.359 sys 0m5.165s 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:29.359 ************************************ 00:10:29.359 END TEST nvmf_target_multipath 00:10:29.359 ************************************ 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.359 ************************************ 00:10:29.359 START TEST nvmf_zcopy 00:10:29.359 ************************************ 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.359 * Looking for test storage... 00:10:29.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.359 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:29.359 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.359 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.359 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.359 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.359 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.359 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.359 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.360 10:25:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.360 10:25:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.360 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:35.934 10:25:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:35.934 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:35.934 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:35.934 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:35.935 Found net devices under 0000:af:00.0: cvl_0_0 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:35.935 Found net devices under 0000:af:00.1: cvl_0_1 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.935 10:25:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.935 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:10:36.194 00:10:36.194 --- 10.0.0.2 ping statistics --- 00:10:36.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.194 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:10:36.194 00:10:36.194 --- 10.0.0.1 ping statistics --- 00:10:36.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.194 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.194 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3777994 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3777994 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3777994 ']' 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.454 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:36.454 [2024-07-25 10:25:39.972281] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
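nvmf_tcp_init and nvmfappstart, traced just above for the zcopy test (and earlier for multipath), build a two-endpoint topology from the two E810 ports and then start the target inside the namespace. A condensed sketch with this run's names and addresses; the readiness poll via rpc.py spdk_get_version is an assumption, the harness uses its own waitforlisten helper:

# Sketch of nvmf_tcp_init + nvmfappstart as traced above (this run's names/addresses).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk                   # namespace hosting the SPDK target
TGT_IF=cvl_0_0; TGT_IP=10.0.0.2      # target port / NVMF_FIRST_TARGET_IP
INI_IF=cvl_0_1; INI_IP=10.0.0.1      # initiator port / NVMF_INITIATOR_IP

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # move the target port into the namespace
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on 4420
ping -c 1 "$TGT_IP" && ip netns exec "$NS" ping -c 1 "$INI_IP"   # both directions reachable
modprobe nvme-tcp                                          # kernel initiator for later connects

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do   # assumed readiness probe
    kill -0 "$nvmfpid" || exit 1
    sleep 0.5
done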
00:10:36.454 [2024-07-25 10:25:39.972331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.454 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.454 [2024-07-25 10:25:40.046959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.454 [2024-07-25 10:25:40.122783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.454 [2024-07-25 10:25:40.122819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.454 [2024-07-25 10:25:40.122832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.454 [2024-07-25 10:25:40.122842] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.454 [2024-07-25 10:25:40.122850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.454 [2024-07-25 10:25:40.122871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 [2024-07-25 10:25:40.812608] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 [2024-07-25 10:25:40.828774] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 malloc0 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:37.393 { 00:10:37.393 "params": { 00:10:37.393 "name": "Nvme$subsystem", 00:10:37.393 "trtype": "$TEST_TRANSPORT", 00:10:37.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.393 "adrfam": "ipv4", 00:10:37.393 "trsvcid": "$NVMF_PORT", 00:10:37.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.393 "hdgst": ${hdgst:-false}, 00:10:37.393 "ddgst": ${ddgst:-false} 00:10:37.393 }, 00:10:37.393 "method": "bdev_nvme_attach_controller" 00:10:37.393 } 00:10:37.393 EOF 00:10:37.393 )") 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
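Collected from the rpc_cmd calls traced above, the zcopy target is provisioned with a handful of RPCs. Written as direct scripts/rpc.py invocations (arguments copied from the trace; rpc_cmd is the harness wrapper around the same RPC socket), the sequence is:

# Sketch: the target provisioning performed by the rpc_cmd calls traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport, zero-copy enabled
$rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB malloc bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns "$NQN" malloc0 -n 1           # expose it as namespace 1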
00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:37.393 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:37.393 "params": { 00:10:37.393 "name": "Nvme1", 00:10:37.393 "trtype": "tcp", 00:10:37.393 "traddr": "10.0.0.2", 00:10:37.393 "adrfam": "ipv4", 00:10:37.393 "trsvcid": "4420", 00:10:37.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.393 "hdgst": false, 00:10:37.393 "ddgst": false 00:10:37.393 }, 00:10:37.393 "method": "bdev_nvme_attach_controller" 00:10:37.393 }' 00:10:37.393 [2024-07-25 10:25:40.919411] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:10:37.393 [2024-07-25 10:25:40.919464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778221 ] 00:10:37.393 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.393 [2024-07-25 10:25:40.989147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.393 [2024-07-25 10:25:41.059869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.653 Running I/O for 10 seconds...
00:10:49.868
00:10:49.868 Latency(us)
00:10:49.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:49.868 Verification LBA range: start 0x0 length 0x1000
00:10:49.868 Nvme1n1 : 10.01 8872.33 69.32 0.00 0.00 14387.13 2333.08 29989.27
00:10:49.868 ===================================================================================================================
00:10:49.868 Total : 8872.33 69.32 0.00 0.00 14387.13 2333.08 29989.27
00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3779931 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:49.868 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:49.869 { 00:10:49.869 "params": { 00:10:49.869 "name": "Nvme$subsystem", 00:10:49.869 "trtype": "$TEST_TRANSPORT", 00:10:49.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.869 "adrfam": "ipv4", 00:10:49.869 "trsvcid": "$NVMF_PORT", 00:10:49.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.869 "hdgst": ${hdgst:-false}, 00:10:49.869 "ddgst": ${ddgst:-false} 00:10:49.869 }, 00:10:49.869 "method": "bdev_nvme_attach_controller" 00:10:49.869 } 00:10:49.869 EOF 00:10:49.869 )") 00:10:49.869 [2024-07-25
10:25:51.555118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.555151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:49.869 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:49.869 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:49.869 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:49.869 "params": { 00:10:49.869 "name": "Nvme1", 00:10:49.869 "trtype": "tcp", 00:10:49.869 "traddr": "10.0.0.2", 00:10:49.869 "adrfam": "ipv4", 00:10:49.869 "trsvcid": "4420", 00:10:49.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.869 "hdgst": false, 00:10:49.869 "ddgst": false 00:10:49.869 }, 00:10:49.869 "method": "bdev_nvme_attach_controller" 00:10:49.869 }' 00:10:49.869 [2024-07-25 10:25:51.567121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.567136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.579149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.579163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.591179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.591191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.597796] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
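Before the namespace errors begin, gen_nvmf_target_json expands its heredoc into the bdev_nvme_attach_controller entry printed above and hands it to bdevperf over --json /dev/fd/63; this second run keeps queue depth 128 and 8 KiB I/O but switches from verify to a 50/50 random read/write mix for 5 seconds (-t 5 -q 128 -w randrw -M 50 -o 8192). As a cross-check on the first run's table, 8872.33 IOPS at 8192 bytes per I/O is 8872.33 * 8192 / 2^20, which is roughly 69.3 MiB/s and matches the reported 69.32 MiB/s column. Below is a standalone sketch of an equivalent invocation: the "params" block is copied from the trace, while the "subsystems"/"config" wrapper is the standard SPDK JSON config layout and is assumed here (the config the helper actually generates may carry additional entries).

#!/usr/bin/env bash
# Hand-written initiator config plus the same bdevperf flags as the run above
# (a sketch; paths assume you run from the SPDK source/build tree).
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# 5 s run, queue depth 128, 8 KiB I/O, 50/50 random read/write.
./build/examples/bdevperf --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192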
00:10:49.869 [2024-07-25 10:25:51.597843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779931 ] 00:10:49.869 [2024-07-25 10:25:51.603211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.603224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.615244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.615257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.627277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.627289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.869 [2024-07-25 10:25:51.639309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.639321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.651340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.651353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.663369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.663381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.667433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.869 [2024-07-25 10:25:51.675403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.675417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.687436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.687449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.699468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.699481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.711505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.711529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.723535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.723547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.735566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.735579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.739289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.869 [2024-07-25 10:25:51.747597] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.747611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.759638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.759661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.771665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.771679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.783694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.783709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.795731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.795762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.807779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.807792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.819795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.819817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.831845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.831865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.843867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.843882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.855903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.855919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.867931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.867946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.879960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.879973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.891994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.892006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.904032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.904048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.916060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.916076] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.928093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.869 [2024-07-25 10:25:51.928105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.869 [2024-07-25 10:25:51.940128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:51.940141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:51.952165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:51.952180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:51.964196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:51.964207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:51.976228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:51.976240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:51.988261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:51.988273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.000293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.000306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.012336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.012355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 Running I/O for 5 seconds... 
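The long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" records above and below is the target rejecting repeated nvmf_subsystem_add_ns calls for NSID 1, which was already attached during the initial setup; each pair is logged from spdk_nvmf_subsystem_add_ns_ext and the nvmf_rpc_ns_paused callback named in the messages, while the second bdevperf job starts its 5-second run. A single instance of the same error pair can be provoked against the target configured earlier (a sketch; assumes that target is still running, NSID 1 is still attached, and you run from the SPDK repo root):

#!/usr/bin/env bash
# Re-issue the namespace attach that the target is rejecting above (a sketch).
if ! scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
    # The target logs "Requested NSID 1 already in use" followed by
    # "Unable to add namespace", and the RPC returns an error.
    echo "nvmf_subsystem_add_ns rejected: NSID 1 is already in use"
fi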
00:10:49.870 [2024-07-25 10:25:52.027691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.027711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.038915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.038935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.053455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.053475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.067291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.067311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.081667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.081687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.097215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.097236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.111311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.111331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.125098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.125118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.138973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.138994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.152831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.152852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.166629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.166654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.180514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.180534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.192136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.192157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.206139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.206159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.219400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 
[2024-07-25 10:25:52.219420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.233761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.233780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.249350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.249378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.263286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.263309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.276537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.276557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.290752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.290772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.301428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.301448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.315094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.315114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.329369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.329388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.344770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.344790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.358356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.358376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.372352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.372371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.382936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.382956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.397506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.397526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.412896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.412916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.427481] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.427505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.441522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.441543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.455015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.455036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.468541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.468561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.482367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.482388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.493643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.493663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.507602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.507622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.870 [2024-07-25 10:25:52.521396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.870 [2024-07-25 10:25:52.521417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.533081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.533102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.547219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.547240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.560737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.560757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.574271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.574290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.587638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.587659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.601440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.601460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.614475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.614495] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.627966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.627986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.641670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.641690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.655188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.655208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.668470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.668490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.681944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.681964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.695364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.695384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.708689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.708709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.722461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.722481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.736408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.736427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.746753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.746773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.760664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.760684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.774211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.774231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.787817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.787837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.801526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.801547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.815194] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.815214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.828677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.828697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.842125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.842145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.855440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.855461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.869617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.869637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.883249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.883268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.897071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.897090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.910757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.910778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.924487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.924507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.937894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.937914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.951461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.951483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.965133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.965153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.978307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.978329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:52.992338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:52.992360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.005792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.005819] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.019523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.019544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.032885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.032905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.046402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.046423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.059798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.059819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.073225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.073245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.087120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.087140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.100598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.100619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.114292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.114313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.127508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.127527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.140823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.140844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.154686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.871 [2024-07-25 10:25:53.154707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.871 [2024-07-25 10:25:53.168017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.168037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.181256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.181281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.195198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.195219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.209017] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.209037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.222288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.222308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.235946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.235967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.249572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.249593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.262864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.262885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.276655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.276675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.290104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.290124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.303705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.303731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.317799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.317821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.328221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.328241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.342170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.342191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.355560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.355580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.369378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.369398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.383247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.383268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.396864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.396885] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.410991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.411010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.426623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.426645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.440643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.440668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.454136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.454156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.467856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.467877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.480957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.480977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.494862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.494882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.508553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.508575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.522050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.522070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.535533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.535553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.549328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.549347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.872 [2024-07-25 10:25:53.563340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.872 [2024-07-25 10:25:53.563361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.131 [2024-07-25 10:25:53.576764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.131 [2024-07-25 10:25:53.576785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.591137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.591158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.602287] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.602308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.616297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.616319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.629943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.629964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.643687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.643711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.657421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.657441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.672141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.672161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.687280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.687300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.701394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.701419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.713018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.713038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.726748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.726768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.741734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.741770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.756388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.756409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.770347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.770368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.783586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.783609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.797701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.797726] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.811204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.811224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.132 [2024-07-25 10:25:53.825038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.132 [2024-07-25 10:25:53.825058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.839059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.839079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.852632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.852651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.867704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.867728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.882655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.882676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.896491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.896511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.910663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.910684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.921969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.921989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.935964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.935985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.949455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.949476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.961057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.961081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.974900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.974920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:53.988166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:53.988187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:54.001823] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:54.001843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:54.015109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:54.015129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:54.028892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:54.028913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:54.042836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:54.042856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:54.057920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:54.057941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:54.072368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:54.072389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.392 [2024-07-25 10:25:54.086348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.392 [2024-07-25 10:25:54.086369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.097747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.097767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.112260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.112279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.122611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.122631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.136541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.136561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.149682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.149702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.163200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.163220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.177068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.177088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.652 [2024-07-25 10:25:54.190438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.652 [2024-07-25 10:25:54.190458] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:50.652 [2024-07-25 10:25:54.203934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:50.652 [2024-07-25 10:25:54.203955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
(... the same two entries, subsystem.c:2058 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553 "Unable to add namespace", repeat roughly every 11-15 ms from 10:25:54.204 through 10:25:57.031 (elapsed stamps 00:10:50.652 through 00:10:53.551) while repeated nvmf_subsystem_add_ns calls keep requesting NSID 1 ...) 
00:10:53.551 
00:10:53.551 Latency(us) 
00:10:53.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:10:53.551 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:10:53.551 Nvme1n1 : 5.01 17228.55 134.60 0.00 0.00 7424.33 2202.01 19818.09 
00:10:53.551 =================================================================================================================== 
00:10:53.551 Total : 17228.55 134.60 0.00 0.00 7424.33 2202.01 19818.09 
(... the add-namespace error pairs continue at the same cadence after the summary, from 10:25:57.040 onward ...) 
00:10:53.551 [2024-07-25 10:25:57.113158] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.113171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.125190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.125204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.137223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.137237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.149253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.149264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.161289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.161301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.173319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.173331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.185350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.185362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.197382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.197395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 [2024-07-25 10:25:57.209413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.551 [2024-07-25 10:25:57.209424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3779931) - No such process 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3779931 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.551 delay0 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:53.551 10:25:57 
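The xtrace just above is zcopy.sh swapping the namespace: it detaches NSID 1, creates a delay bdev (delay0) on top of malloc0, and re-attaches it as NSID 1. A minimal hand-run sketch of the same RPC sequence follows; it assumes rpc_cmd resolves to the repo's scripts/rpc.py talking to the default RPC socket, and every name and value below is copied from the trace rather than introduced here.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_ns "$NQN" 1            # detach NSID 1 first
  $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1     # re-attach the delay bdev as NSID 1
  # Requesting -n 1 again while that NSID is still attached is what produces the long run of
  # "Requested NSID 1 already in use" / "Unable to add namespace" errors seen above.

Since the job goes on to print its I/O summary and close the test normally, that error run reads as an exercised failure path rather than a hang.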
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.551 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:53.810 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.810 [2024-07-25 10:25:57.379874] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:00.376 Initializing NVMe Controllers 00:11:00.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:00.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:00.376 Initialization complete. Launching workers. 00:11:00.376 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:11:00.376 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:11:00.376 success 186, unsuccess 186, failed 0 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.376 rmmod nvme_tcp 00:11:00.376 rmmod nvme_fabrics 00:11:00.376 rmmod nvme_keyring 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3777994 ']' 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3777994 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3777994 ']' 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3777994 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3777994 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:00.376 10:26:03 
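For reference, the abort example above selects its target purely through the -r transport ID string (trtype, adrfam, traddr, trsvcid, ns). A sketch of invoking it by hand against a live listener at the same address; the flag readings (-c core mask, -t run time in seconds, -q queue depth, -w workload, -M read percentage, -l log level) follow the usual SPDK example-tool conventions and should be taken as assumptions rather than something this trace spells out.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
  # Expect a closing summary like the one above: I/O completed vs. failed, aborts submitted,
  # and the success/unsuccess/failed counters.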
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3777994' 00:11:00.376 killing process with pid 3777994 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3777994 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3777994 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.376 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.282 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.282 00:11:02.282 real 0m33.004s 00:11:02.282 user 0m42.223s 00:11:02.282 sys 0m13.393s 00:11:02.282 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.282 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:02.282 ************************************ 00:11:02.282 END TEST nvmf_zcopy 00:11:02.282 ************************************ 00:11:02.282 10:26:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:02.282 10:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.282 10:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.282 10:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.541 ************************************ 00:11:02.541 START TEST nvmf_nmic 00:11:02.541 ************************************ 00:11:02.541 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:02.541 * Looking for test storage... 
00:11:02.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.541 10:26:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.541 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:09.113 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:09.113 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.113 10:26:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:09.113 Found net devices under 0000:af:00.0: cvl_0_0 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:09.113 Found net devices under 0000:af:00.1: cvl_0_1 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.113 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:09.114 00:11:09.114 --- 10.0.0.2 ping statistics --- 00:11:09.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.114 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:11:09.114 00:11:09.114 --- 10.0.0.1 ping statistics --- 00:11:09.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.114 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3785666 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3785666 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3785666 ']' 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.114 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.114 [2024-07-25 10:26:12.539569] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:11:09.114 [2024-07-25 10:26:12.539615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.114 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.114 [2024-07-25 10:26:12.611771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.114 [2024-07-25 10:26:12.686731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.114 [2024-07-25 10:26:12.686769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.114 [2024-07-25 10:26:12.686778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.114 [2024-07-25 10:26:12.686786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.114 [2024-07-25 10:26:12.686809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
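For orientation, the nvmfappstart step traced here boils down to launching nvmf_tgt inside the target namespace and waiting for its JSON-RPC socket. The sketch below is a simplified stand-in for the waitforlisten helper in autotest_common.sh (the polling loop and the use of rpc_get_methods are editorial assumptions, not part of this run); the path, namespace name, and core mask are copied from the log.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # checkout path used in this run
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &        # -m 0xF: four reactors, matching the notices above
nvmfpid=$!
# Simplified wait-for-RPC loop (stand-in for waitforlisten): poll the default
# /var/tmp/spdk.sock socket until the target answers a trivial RPC.
while ! "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.5
done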
00:11:09.114 [2024-07-25 10:26:12.686855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.114 [2024-07-25 10:26:12.686949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.114 [2024-07-25 10:26:12.687038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.114 [2024-07-25 10:26:12.687040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.683 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.683 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:09.683 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.683 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.683 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 [2024-07-25 10:26:13.402069] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 Malloc0 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 [2024-07-25 10:26:13.456952] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:09.942 test case1: single bdev can't be used in multiple subsystems 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 [2024-07-25 10:26:13.480833] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:09.942 [2024-07-25 10:26:13.480858] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:09.942 [2024-07-25 10:26:13.480868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.942 request: 00:11:09.942 { 00:11:09.942 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:09.942 "namespace": { 00:11:09.942 "bdev_name": "Malloc0", 00:11:09.942 "no_auto_visible": false 00:11:09.942 }, 00:11:09.942 "method": "nvmf_subsystem_add_ns", 00:11:09.942 "req_id": 1 00:11:09.942 } 00:11:09.942 Got JSON-RPC error response 00:11:09.942 response: 00:11:09.942 { 00:11:09.942 "code": -32602, 00:11:09.942 "message": "Invalid parameters" 00:11:09.942 } 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:09.942 Adding namespace failed - expected result. 
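Condensed from the rpc_cmd calls traced above, the nmic "test case1" flow amounts to the RPC sequence below. It assumes $SPDK_DIR points at the SPDK checkout and that a target is already listening on /var/tmp/spdk.sock; the second nvmf_subsystem_add_ns is expected to fail with "Invalid parameters", exactly as the JSON-RPC error just shown.

rpc="$SPDK_DIR/scripts/rpc.py"                                     # assumes SPDK_DIR is set
$rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, as in nmic.sh
$rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Malloc0 is already claimed by cnode1, so this call is rejected:
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi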
00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:09.942 test case2: host connect to nvmf target in multiple paths 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.942 [2024-07-25 10:26:13.497001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.942 10:26:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.322 10:26:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:12.701 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.701 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:12.701 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.701 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:12.702 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:14.610 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:14.610 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:14.610 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.610 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:14.610 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.610 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:14.610 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:14.610 [global] 00:11:14.610 thread=1 00:11:14.610 invalidate=1 00:11:14.610 rw=write 00:11:14.610 time_based=1 00:11:14.610 runtime=1 00:11:14.610 ioengine=libaio 00:11:14.610 direct=1 00:11:14.610 bs=4096 00:11:14.610 iodepth=1 00:11:14.610 norandommap=0 00:11:14.610 numjobs=1 00:11:14.610 00:11:14.610 verify_dump=1 00:11:14.610 verify_backlog=512 00:11:14.610 verify_state_save=0 00:11:14.610 do_verify=1 00:11:14.610 verify=crc32c-intel 00:11:14.610 [job0] 00:11:14.610 filename=/dev/nvme0n1 00:11:14.875 Could not set queue depth (nvme0n1) 00:11:15.132 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:15.132 fio-3.35 00:11:15.132 Starting 1 thread 00:11:16.061 00:11:16.061 job0: (groupid=0, jobs=1): err= 0: pid=3786901: Thu Jul 25 10:26:19 2024 00:11:16.061 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:11:16.061 slat (nsec): min=11652, max=26201, avg=24777.82, stdev=3002.32 00:11:16.061 clat (usec): min=40843, max=42015, avg=41418.94, stdev=508.01 00:11:16.061 lat (usec): min=40869, max=42041, avg=41443.72, stdev=508.64 00:11:16.061 clat percentiles (usec): 00:11:16.061 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:16.061 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:11:16.061 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:16.061 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.061 | 99.99th=[42206] 00:11:16.061 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:11:16.061 slat (nsec): min=11364, max=88531, avg=12428.63, stdev=3753.48 00:11:16.061 clat (usec): min=176, max=442, avg=207.86, stdev=22.41 00:11:16.061 lat (usec): min=198, max=530, avg=220.29, stdev=24.06 00:11:16.061 clat percentiles (usec): 00:11:16.061 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 194], 00:11:16.061 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:11:16.061 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 243], 95.00th=[ 253], 00:11:16.061 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 441], 99.95th=[ 441], 00:11:16.061 | 99.99th=[ 441] 00:11:16.062 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.062 lat (usec) : 250=88.58%, 500=7.30% 00:11:16.062 lat (msec) : 50=4.12% 00:11:16.062 cpu : usr=0.29%, sys=0.59%, ctx=534, majf=0, minf=2 00:11:16.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.062 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.062 00:11:16.062 Run status group 0 (all jobs): 00:11:16.062 READ: bw=85.8KiB/s (87.8kB/s), 85.8KiB/s-85.8KiB/s (87.8kB/s-87.8kB/s), io=88.0KiB (90.1kB), run=1026-1026msec 00:11:16.062 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:11:16.062 00:11:16.062 Disk stats (read/write): 00:11:16.062 nvme0n1: ios=68/512, merge=0/0, ticks=1003/104, in_queue=1107, util=96.19% 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.318 
10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.318 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.318 rmmod nvme_tcp 00:11:16.318 rmmod nvme_fabrics 00:11:16.318 rmmod nvme_keyring 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3785666 ']' 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3785666 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3785666 ']' 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3785666 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3785666 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3785666' 00:11:16.576 killing process with pid 3785666 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3785666 00:11:16.576 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3785666 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.833 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.734 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:18.734 00:11:18.734 real 0m16.371s 00:11:18.734 user 0m39.684s 00:11:18.734 sys 0m5.864s 00:11:18.734 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.734 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.734 ************************************ 00:11:18.734 END TEST nvmf_nmic 00:11:18.734 ************************************ 00:11:18.734 10:26:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:18.734 10:26:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.734 10:26:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.734 10:26:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.993 ************************************ 00:11:18.993 START TEST nvmf_fio_target 00:11:18.993 ************************************ 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:18.993 * Looking for test storage... 00:11:18.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.993 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:18.994 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.552 10:26:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:25.552 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:25.552 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.552 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:25.553 Found net devices under 0000:af:00.0: cvl_0_0 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:25.553 Found net devices under 0000:af:00.1: cvl_0_1 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.553 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.553 10:26:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:11:25.811 00:11:25.811 --- 10.0.0.2 ping statistics --- 00:11:25.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.811 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:11:25.811 00:11:25.811 --- 10.0.0.1 ping statistics --- 00:11:25.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.811 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3790800 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3790800 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3790800 ']' 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.811 10:26:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.811 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.811 [2024-07-25 10:26:29.426089] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:11:25.811 [2024-07-25 10:26:29.426136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.811 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.811 [2024-07-25 10:26:29.501637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.069 [2024-07-25 10:26:29.576508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.069 [2024-07-25 10:26:29.576549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.069 [2024-07-25 10:26:29.576559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.069 [2024-07-25 10:26:29.576568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.069 [2024-07-25 10:26:29.576575] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
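The nvmf_tcp_init trace above builds a two-namespace loopback topology on the E810 ports: the target-side net device (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator keeps cvl_0_1 in the default namespace at 10.0.0.1/24, and TCP port 4420 is opened before both sides are ping-verified. A condensed sketch of those same steps, assuming the cvl_0_0/cvl_0_1 device names reported above (the test itself drives this through the helpers in nvmf/common.sh, not this standalone script):

# target-side interface goes into its own network namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator side (default namespace) gets 10.0.0.1, target side 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring links up, open the NVMe/TCP port, and verify reachability both ways
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that topology in place, nvmf_tgt is launched inside the namespace (the NVMF_APP command is prefixed with "ip netns exec cvl_0_0_ns_spdk", as the nvmfappstart trace above shows), which is why the listener later created on 10.0.0.2:4420 is reachable from the host-side initiator over cvl_0_1.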
00:11:26.069 [2024-07-25 10:26:29.576623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.069 [2024-07-25 10:26:29.576726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.069 [2024-07-25 10:26:29.576778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.069 [2024-07-25 10:26:29.576781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.634 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.634 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:26.634 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.634 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.634 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.634 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.634 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:26.891 [2024-07-25 10:26:30.419389] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.891 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.147 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:27.147 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.404 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:27.404 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.404 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:27.404 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.661 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:27.661 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:27.918 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:28.176 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:28.176 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:28.176 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:28.176 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:28.433 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:28.433 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:28.690 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:28.690 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:28.690 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.947 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:28.947 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.204 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.204 [2024-07-25 10:26:32.908041] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.463 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:29.463 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:29.731 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.118 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:31.118 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.118 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.118 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:31.118 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:31.118 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:33.009 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:33.009 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:33.009 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.009 10:26:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:33.009 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.009 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:33.009 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:33.009 [global] 00:11:33.009 thread=1 00:11:33.009 invalidate=1 00:11:33.009 rw=write 00:11:33.009 time_based=1 00:11:33.009 runtime=1 00:11:33.009 ioengine=libaio 00:11:33.009 direct=1 00:11:33.009 bs=4096 00:11:33.009 iodepth=1 00:11:33.009 norandommap=0 00:11:33.009 numjobs=1 00:11:33.009 00:11:33.009 verify_dump=1 00:11:33.009 verify_backlog=512 00:11:33.009 verify_state_save=0 00:11:33.009 do_verify=1 00:11:33.009 verify=crc32c-intel 00:11:33.009 [job0] 00:11:33.009 filename=/dev/nvme0n1 00:11:33.009 [job1] 00:11:33.009 filename=/dev/nvme0n2 00:11:33.009 [job2] 00:11:33.009 filename=/dev/nvme0n3 00:11:33.009 [job3] 00:11:33.009 filename=/dev/nvme0n4 00:11:33.284 Could not set queue depth (nvme0n1) 00:11:33.284 Could not set queue depth (nvme0n2) 00:11:33.284 Could not set queue depth (nvme0n3) 00:11:33.284 Could not set queue depth (nvme0n4) 00:11:33.543 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.543 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.543 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.543 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.543 fio-3.35 00:11:33.543 Starting 4 threads 00:11:34.927 00:11:34.927 job0: (groupid=0, jobs=1): err= 0: pid=3792240: Thu Jul 25 10:26:38 2024 00:11:34.927 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:34.927 slat (nsec): min=8865, max=30963, avg=9686.60, stdev=1387.58 00:11:34.927 clat (usec): min=312, max=536, avg=390.24, stdev=24.09 00:11:34.927 lat (usec): min=321, max=565, avg=399.93, stdev=24.27 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 330], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 375], 00:11:34.927 | 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 388], 60.00th=[ 392], 00:11:34.927 | 70.00th=[ 396], 80.00th=[ 400], 90.00th=[ 408], 95.00th=[ 416], 00:11:34.927 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 537], 00:11:34.927 | 99.99th=[ 537] 00:11:34.927 write: IOPS=1581, BW=6326KiB/s (6477kB/s)(6332KiB/1001msec); 0 zone resets 00:11:34.927 slat (nsec): min=12205, max=45261, avg=13436.51, stdev=1988.19 00:11:34.927 clat (usec): min=187, max=367, avg=224.63, stdev=22.93 00:11:34.927 lat (usec): min=199, max=413, avg=238.07, stdev=23.25 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:11:34.927 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:11:34.927 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 265], 00:11:34.927 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 326], 99.95th=[ 367], 00:11:34.927 | 99.99th=[ 367] 00:11:34.927 bw ( KiB/s): min= 8192, max= 8192, per=49.78%, avg=8192.00, stdev= 0.00, samples=1 00:11:34.927 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 
00:11:34.927 lat (usec) : 250=41.74%, 500=57.90%, 750=0.35% 00:11:34.927 cpu : usr=3.20%, sys=5.20%, ctx=3120, majf=0, minf=1 00:11:34.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 issued rwts: total=1536,1583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.927 job1: (groupid=0, jobs=1): err= 0: pid=3792263: Thu Jul 25 10:26:38 2024 00:11:34.927 read: IOPS=1019, BW=4079KiB/s (4177kB/s)(4108KiB/1007msec) 00:11:34.927 slat (nsec): min=8540, max=35458, avg=9478.16, stdev=1495.91 00:11:34.927 clat (usec): min=392, max=41989, avg=613.13, stdev=2213.55 00:11:34.927 lat (usec): min=401, max=42000, avg=622.60, stdev=2213.62 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 408], 5.00th=[ 441], 10.00th=[ 469], 20.00th=[ 482], 00:11:34.927 | 30.00th=[ 490], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 498], 00:11:34.927 | 70.00th=[ 502], 80.00th=[ 506], 90.00th=[ 515], 95.00th=[ 519], 00:11:34.927 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[41157], 99.95th=[42206], 00:11:34.927 | 99.99th=[42206] 00:11:34.927 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:11:34.927 slat (nsec): min=11259, max=39592, avg=12589.08, stdev=1746.35 00:11:34.927 clat (usec): min=186, max=477, avg=222.87, stdev=24.47 00:11:34.927 lat (usec): min=199, max=488, avg=235.46, stdev=24.68 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:11:34.927 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 225], 00:11:34.927 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:11:34.927 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 404], 99.95th=[ 478], 00:11:34.927 | 99.99th=[ 478] 00:11:34.927 bw ( KiB/s): min= 4256, max= 8032, per=37.33%, avg=6144.00, stdev=2670.04, samples=2 00:11:34.927 iops : min= 1064, max= 2008, avg=1536.00, stdev=667.51, samples=2 00:11:34.927 lat (usec) : 250=50.84%, 500=34.96%, 750=14.05% 00:11:34.927 lat (msec) : 4=0.04%, 50=0.12% 00:11:34.927 cpu : usr=1.39%, sys=3.28%, ctx=2563, majf=0, minf=1 00:11:34.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 issued rwts: total=1027,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.927 job2: (groupid=0, jobs=1): err= 0: pid=3792282: Thu Jul 25 10:26:38 2024 00:11:34.927 read: IOPS=332, BW=1331KiB/s (1363kB/s)(1332KiB/1001msec) 00:11:34.927 slat (nsec): min=9146, max=22932, avg=10083.45, stdev=1412.64 00:11:34.927 clat (usec): min=490, max=41036, avg=2586.54, stdev=8834.36 00:11:34.927 lat (usec): min=500, max=41048, avg=2596.62, stdev=8834.78 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 498], 5.00th=[ 506], 10.00th=[ 515], 20.00th=[ 523], 00:11:34.927 | 30.00th=[ 529], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 537], 00:11:34.927 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 603], 95.00th=[34341], 00:11:34.927 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:34.927 | 99.99th=[41157] 00:11:34.927 write: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:34.927 slat (nsec): min=12391, max=43631, avg=14552.06, stdev=2617.83 00:11:34.927 clat (usec): min=200, max=2023, avg=245.68, stdev=103.45 00:11:34.927 lat (usec): min=213, max=2036, avg=260.23, stdev=103.85 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:11:34.927 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 241], 00:11:34.927 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:11:34.927 | 99.00th=[ 388], 99.50th=[ 482], 99.90th=[ 2024], 99.95th=[ 2024], 00:11:34.927 | 99.99th=[ 2024] 00:11:34.927 bw ( KiB/s): min= 4096, max= 4096, per=24.89%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.927 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:34.927 lat (usec) : 250=43.43%, 500=17.40%, 750=36.92% 00:11:34.927 lat (msec) : 2=0.12%, 4=0.12%, 50=2.01% 00:11:34.927 cpu : usr=0.40%, sys=1.90%, ctx=845, majf=0, minf=1 00:11:34.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 issued rwts: total=333,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.927 job3: (groupid=0, jobs=1): err= 0: pid=3792290: Thu Jul 25 10:26:38 2024 00:11:34.927 read: IOPS=427, BW=1710KiB/s (1751kB/s)(1712KiB/1001msec) 00:11:34.927 slat (nsec): min=9175, max=40346, avg=12040.35, stdev=5204.22 00:11:34.927 clat (usec): min=367, max=42122, avg=2004.97, stdev=7744.90 00:11:34.927 lat (usec): min=377, max=42148, avg=2017.01, stdev=7747.28 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 408], 00:11:34.927 | 30.00th=[ 429], 40.00th=[ 457], 50.00th=[ 494], 60.00th=[ 510], 00:11:34.927 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 635], 95.00th=[ 660], 00:11:34.927 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:34.927 | 99.99th=[42206] 00:11:34.927 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:34.927 slat (nsec): min=12113, max=39060, avg=13387.16, stdev=2104.30 00:11:34.927 clat (usec): min=196, max=445, avg=248.11, stdev=32.08 00:11:34.927 lat (usec): min=209, max=458, avg=261.50, stdev=32.28 00:11:34.927 clat percentiles (usec): 00:11:34.927 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:11:34.927 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:11:34.927 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 314], 00:11:34.927 | 99.00th=[ 359], 99.50th=[ 416], 99.90th=[ 445], 99.95th=[ 445], 00:11:34.927 | 99.99th=[ 445] 00:11:34.927 bw ( KiB/s): min= 4096, max= 4096, per=24.89%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.927 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:34.927 lat (usec) : 250=32.98%, 500=45.96%, 750=19.36% 00:11:34.927 lat (msec) : 50=1.70% 00:11:34.927 cpu : usr=0.80%, sys=1.10%, ctx=942, majf=0, minf=2 00:11:34.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.927 issued rwts: total=428,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.927 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:11:34.927 00:11:34.927 Run status group 0 (all jobs): 00:11:34.927 READ: bw=12.9MiB/s (13.5MB/s), 1331KiB/s-6138KiB/s (1363kB/s-6285kB/s), io=13.0MiB (13.6MB), run=1001-1007msec 00:11:34.927 WRITE: bw=16.1MiB/s (16.9MB/s), 2046KiB/s-6326KiB/s (2095kB/s-6477kB/s), io=16.2MiB (17.0MB), run=1001-1007msec 00:11:34.927 00:11:34.927 Disk stats (read/write): 00:11:34.927 nvme0n1: ios=1171/1536, merge=0/0, ticks=1445/319, in_queue=1764, util=98.80% 00:11:34.927 nvme0n2: ios=1024/1251, merge=0/0, ticks=497/277, in_queue=774, util=84.33% 00:11:34.927 nvme0n3: ios=16/512, merge=0/0, ticks=656/117, in_queue=773, util=87.96% 00:11:34.927 nvme0n4: ios=90/512, merge=0/0, ticks=1623/118, in_queue=1741, util=99.24% 00:11:34.927 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:34.927 [global] 00:11:34.927 thread=1 00:11:34.928 invalidate=1 00:11:34.928 rw=randwrite 00:11:34.928 time_based=1 00:11:34.928 runtime=1 00:11:34.928 ioengine=libaio 00:11:34.928 direct=1 00:11:34.928 bs=4096 00:11:34.928 iodepth=1 00:11:34.928 norandommap=0 00:11:34.928 numjobs=1 00:11:34.928 00:11:34.928 verify_dump=1 00:11:34.928 verify_backlog=512 00:11:34.928 verify_state_save=0 00:11:34.928 do_verify=1 00:11:34.928 verify=crc32c-intel 00:11:34.928 [job0] 00:11:34.928 filename=/dev/nvme0n1 00:11:34.928 [job1] 00:11:34.928 filename=/dev/nvme0n2 00:11:34.928 [job2] 00:11:34.928 filename=/dev/nvme0n3 00:11:34.928 [job3] 00:11:34.928 filename=/dev/nvme0n4 00:11:34.928 Could not set queue depth (nvme0n1) 00:11:34.928 Could not set queue depth (nvme0n2) 00:11:34.928 Could not set queue depth (nvme0n3) 00:11:34.928 Could not set queue depth (nvme0n4) 00:11:35.186 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.186 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.186 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.186 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.186 fio-3.35 00:11:35.186 Starting 4 threads 00:11:36.598 00:11:36.598 job0: (groupid=0, jobs=1): err= 0: pid=3792699: Thu Jul 25 10:26:39 2024 00:11:36.598 read: IOPS=1369, BW=5479KiB/s (5610kB/s)(5484KiB/1001msec) 00:11:36.598 slat (nsec): min=8549, max=36512, avg=10053.63, stdev=2002.88 00:11:36.598 clat (usec): min=250, max=672, avg=433.65, stdev=68.25 00:11:36.598 lat (usec): min=259, max=681, avg=443.71, stdev=68.63 00:11:36.598 clat percentiles (usec): 00:11:36.598 | 1.00th=[ 285], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 375], 00:11:36.598 | 30.00th=[ 388], 40.00th=[ 408], 50.00th=[ 429], 60.00th=[ 445], 00:11:36.598 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[ 545], 00:11:36.598 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 668], 99.95th=[ 676], 00:11:36.598 | 99.99th=[ 676] 00:11:36.598 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:36.598 slat (nsec): min=11203, max=43217, avg=14027.64, stdev=2050.58 00:11:36.599 clat (usec): min=147, max=3252, avg=235.19, stdev=85.70 00:11:36.599 lat (usec): min=160, max=3266, avg=249.22, stdev=86.06 00:11:36.599 clat percentiles (usec): 00:11:36.599 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 186], 20.00th=[ 200], 00:11:36.599 | 
30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:11:36.599 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:11:36.599 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 396], 99.95th=[ 3261], 00:11:36.599 | 99.99th=[ 3261] 00:11:36.599 bw ( KiB/s): min= 7448, max= 7448, per=35.84%, avg=7448.00, stdev= 0.00, samples=1 00:11:36.599 iops : min= 1862, max= 1862, avg=1862.00, stdev= 0.00, samples=1 00:11:36.599 lat (usec) : 250=36.09%, 500=55.11%, 750=8.77% 00:11:36.599 lat (msec) : 4=0.03% 00:11:36.599 cpu : usr=2.30%, sys=3.70%, ctx=2908, majf=0, minf=1 00:11:36.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 issued rwts: total=1371,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.599 job1: (groupid=0, jobs=1): err= 0: pid=3792714: Thu Jul 25 10:26:39 2024 00:11:36.599 read: IOPS=753, BW=3015KiB/s (3087kB/s)(3048KiB/1011msec) 00:11:36.599 slat (nsec): min=8797, max=28400, avg=9891.11, stdev=1869.75 00:11:36.599 clat (usec): min=357, max=41986, avg=915.42, stdev=3896.46 00:11:36.599 lat (usec): min=367, max=42014, avg=925.31, stdev=3897.94 00:11:36.599 clat percentiles (usec): 00:11:36.599 | 1.00th=[ 445], 5.00th=[ 482], 10.00th=[ 486], 20.00th=[ 494], 00:11:36.599 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 545], 00:11:36.599 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 660], 00:11:36.599 | 99.00th=[ 930], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:36.599 | 99.99th=[42206] 00:11:36.599 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:11:36.599 slat (nsec): min=9153, max=41379, avg=12954.97, stdev=2000.96 00:11:36.599 clat (usec): min=223, max=448, avg=281.19, stdev=33.36 00:11:36.599 lat (usec): min=236, max=489, avg=294.14, stdev=33.53 00:11:36.599 clat percentiles (usec): 00:11:36.599 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 253], 00:11:36.599 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:11:36.599 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 347], 00:11:36.599 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 408], 99.95th=[ 449], 00:11:36.599 | 99.99th=[ 449] 00:11:36.599 bw ( KiB/s): min= 2216, max= 5976, per=19.71%, avg=4096.00, stdev=2658.72, samples=2 00:11:36.599 iops : min= 554, max= 1494, avg=1024.00, stdev=664.68, samples=2 00:11:36.599 lat (usec) : 250=10.13%, 500=58.01%, 750=31.35%, 1000=0.11% 00:11:36.599 lat (msec) : 50=0.39% 00:11:36.599 cpu : usr=0.89%, sys=2.87%, ctx=1789, majf=0, minf=2 00:11:36.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 issued rwts: total=762,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.599 job2: (groupid=0, jobs=1): err= 0: pid=3792736: Thu Jul 25 10:26:39 2024 00:11:36.599 read: IOPS=997, BW=3988KiB/s (4084kB/s)(4116KiB/1032msec) 00:11:36.599 slat (nsec): min=7175, max=43582, avg=11419.38, stdev=2412.83 00:11:36.599 clat (usec): min=307, max=41084, avg=589.94, stdev=2823.44 00:11:36.599 lat (usec): min=315, max=41113, avg=601.36, 
stdev=2824.37 00:11:36.599 clat percentiles (usec): 00:11:36.599 | 1.00th=[ 318], 5.00th=[ 338], 10.00th=[ 371], 20.00th=[ 383], 00:11:36.599 | 30.00th=[ 388], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 400], 00:11:36.599 | 70.00th=[ 404], 80.00th=[ 408], 90.00th=[ 416], 95.00th=[ 420], 00:11:36.599 | 99.00th=[ 482], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157], 00:11:36.599 | 99.99th=[41157] 00:11:36.599 write: IOPS=1488, BW=5953KiB/s (6096kB/s)(6144KiB/1032msec); 0 zone resets 00:11:36.599 slat (usec): min=8, max=261, avg=14.80, stdev=13.88 00:11:36.599 clat (usec): min=37, max=720, avg=249.06, stdev=47.92 00:11:36.599 lat (usec): min=203, max=851, avg=263.86, stdev=52.22 00:11:36.599 clat percentiles (usec): 00:11:36.599 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:11:36.599 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 245], 00:11:36.599 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 359], 00:11:36.599 | 99.00th=[ 392], 99.50th=[ 437], 99.90th=[ 611], 99.95th=[ 717], 00:11:36.599 | 99.99th=[ 717] 00:11:36.599 bw ( KiB/s): min= 4096, max= 8192, per=29.56%, avg=6144.00, stdev=2896.31, samples=2 00:11:36.599 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:36.599 lat (usec) : 50=0.04%, 250=37.82%, 500=61.75%, 750=0.19% 00:11:36.599 lat (msec) : 50=0.19% 00:11:36.599 cpu : usr=1.94%, sys=3.59%, ctx=2569, majf=0, minf=1 00:11:36.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 issued rwts: total=1029,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.599 job3: (groupid=0, jobs=1): err= 0: pid=3792744: Thu Jul 25 10:26:39 2024 00:11:36.599 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:36.599 slat (nsec): min=7559, max=50483, avg=11243.27, stdev=2403.63 00:11:36.599 clat (usec): min=390, max=805, avg=572.45, stdev=68.20 00:11:36.599 lat (usec): min=398, max=817, avg=583.70, stdev=68.46 00:11:36.599 clat percentiles (usec): 00:11:36.599 | 1.00th=[ 433], 5.00th=[ 478], 10.00th=[ 494], 20.00th=[ 519], 00:11:36.599 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 578], 00:11:36.599 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 701], 00:11:36.599 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 791], 99.95th=[ 807], 00:11:36.599 | 99.99th=[ 807] 00:11:36.599 write: IOPS=1264, BW=5059KiB/s (5180kB/s)(5064KiB/1001msec); 0 zone resets 00:11:36.599 slat (nsec): min=11764, max=70945, avg=14094.03, stdev=2781.01 00:11:36.599 clat (usec): min=208, max=721, avg=297.95, stdev=55.23 00:11:36.599 lat (usec): min=221, max=737, avg=312.04, stdev=55.60 00:11:36.599 clat percentiles (usec): 00:11:36.599 | 1.00th=[ 221], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 258], 00:11:36.599 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 293], 00:11:36.599 | 70.00th=[ 314], 80.00th=[ 343], 90.00th=[ 383], 95.00th=[ 408], 00:11:36.599 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 529], 99.95th=[ 725], 00:11:36.599 | 99.99th=[ 725] 00:11:36.599 bw ( KiB/s): min= 4448, max= 4448, per=21.40%, avg=4448.00, stdev= 0.00, samples=1 00:11:36.599 iops : min= 1112, max= 1112, avg=1112.00, stdev= 0.00, samples=1 00:11:36.599 lat (usec) : 250=7.47%, 500=52.79%, 750=39.48%, 1000=0.26% 00:11:36.599 cpu : usr=1.60%, sys=5.00%, ctx=2293, 
majf=0, minf=1 00:11:36.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.599 issued rwts: total=1024,1266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.599 00:11:36.599 Run status group 0 (all jobs): 00:11:36.599 READ: bw=15.8MiB/s (16.6MB/s), 3015KiB/s-5479KiB/s (3087kB/s-5610kB/s), io=16.4MiB (17.1MB), run=1001-1032msec 00:11:36.599 WRITE: bw=20.3MiB/s (21.3MB/s), 4051KiB/s-6138KiB/s (4149kB/s-6285kB/s), io=20.9MiB (22.0MB), run=1001-1032msec 00:11:36.599 00:11:36.599 Disk stats (read/write): 00:11:36.599 nvme0n1: ios=1051/1322, merge=0/0, ticks=1404/304, in_queue=1708, util=99.50% 00:11:36.599 nvme0n2: ios=781/1024, merge=0/0, ticks=1474/278, in_queue=1752, util=99.80% 00:11:36.599 nvme0n3: ios=1063/1524, merge=0/0, ticks=1255/369, in_queue=1624, util=96.37% 00:11:36.599 nvme0n4: ios=817/1024, merge=0/0, ticks=470/310, in_queue=780, util=89.38% 00:11:36.599 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:36.599 [global] 00:11:36.599 thread=1 00:11:36.599 invalidate=1 00:11:36.599 rw=write 00:11:36.599 time_based=1 00:11:36.599 runtime=1 00:11:36.599 ioengine=libaio 00:11:36.599 direct=1 00:11:36.599 bs=4096 00:11:36.599 iodepth=128 00:11:36.599 norandommap=0 00:11:36.599 numjobs=1 00:11:36.599 00:11:36.599 verify_dump=1 00:11:36.599 verify_backlog=512 00:11:36.599 verify_state_save=0 00:11:36.599 do_verify=1 00:11:36.599 verify=crc32c-intel 00:11:36.599 [job0] 00:11:36.599 filename=/dev/nvme0n1 00:11:36.599 [job1] 00:11:36.599 filename=/dev/nvme0n2 00:11:36.599 [job2] 00:11:36.599 filename=/dev/nvme0n3 00:11:36.599 [job3] 00:11:36.599 filename=/dev/nvme0n4 00:11:36.599 Could not set queue depth (nvme0n1) 00:11:36.599 Could not set queue depth (nvme0n2) 00:11:36.599 Could not set queue depth (nvme0n3) 00:11:36.599 Could not set queue depth (nvme0n4) 00:11:36.859 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.859 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.859 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.859 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.859 fio-3.35 00:11:36.859 Starting 4 threads 00:11:38.252 00:11:38.252 job0: (groupid=0, jobs=1): err= 0: pid=3793147: Thu Jul 25 10:26:41 2024 00:11:38.252 read: IOPS=4305, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1008msec) 00:11:38.252 slat (nsec): min=1701, max=19146k, avg=110891.67, stdev=875021.54 00:11:38.252 clat (usec): min=2774, max=60842, avg=15244.11, stdev=7194.01 00:11:38.252 lat (usec): min=3521, max=60847, avg=15355.00, stdev=7254.44 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 4883], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9765], 00:11:38.252 | 30.00th=[10814], 40.00th=[12780], 50.00th=[13566], 60.00th=[15926], 00:11:38.252 | 70.00th=[18220], 80.00th=[19530], 90.00th=[22938], 95.00th=[25297], 00:11:38.252 | 99.00th=[54789], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080], 00:11:38.252 | 99.99th=[61080] 00:11:38.252 write: 
IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:11:38.252 slat (usec): min=2, max=11120, avg=92.87, stdev=520.58 00:11:38.252 clat (usec): min=1403, max=60829, avg=13407.02, stdev=6380.16 00:11:38.252 lat (usec): min=1419, max=60834, avg=13499.90, stdev=6408.48 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 4015], 5.00th=[ 5800], 10.00th=[ 7046], 20.00th=[ 8356], 00:11:38.252 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11863], 60.00th=[13829], 00:11:38.252 | 70.00th=[18220], 80.00th=[19268], 90.00th=[19530], 95.00th=[20317], 00:11:38.252 | 99.00th=[41157], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:11:38.252 | 99.99th=[61080] 00:11:38.252 bw ( KiB/s): min=15632, max=21232, per=26.36%, avg=18432.00, stdev=3959.80, samples=2 00:11:38.252 iops : min= 3908, max= 5308, avg=4608.00, stdev=989.95, samples=2 00:11:38.252 lat (msec) : 2=0.03%, 4=0.66%, 10=27.19%, 20=60.84%, 50=10.48% 00:11:38.252 lat (msec) : 100=0.79% 00:11:38.252 cpu : usr=4.67%, sys=6.75%, ctx=410, majf=0, minf=1 00:11:38.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:38.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.252 issued rwts: total=4340,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.252 job1: (groupid=0, jobs=1): err= 0: pid=3793162: Thu Jul 25 10:26:41 2024 00:11:38.252 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:11:38.252 slat (usec): min=2, max=28971, avg=169.23, stdev=1226.04 00:11:38.252 clat (usec): min=6996, max=55870, avg=20360.42, stdev=9215.17 00:11:38.252 lat (usec): min=7010, max=55900, avg=20529.64, stdev=9306.41 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10814], 20.00th=[11207], 00:11:38.252 | 30.00th=[14091], 40.00th=[15664], 50.00th=[19792], 60.00th=[21365], 00:11:38.252 | 70.00th=[22676], 80.00th=[26346], 90.00th=[35390], 95.00th=[39060], 00:11:38.252 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51643], 99.95th=[55313], 00:11:38.252 | 99.99th=[55837] 00:11:38.252 write: IOPS=3199, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1012msec); 0 zone resets 00:11:38.252 slat (usec): min=3, max=15636, avg=140.76, stdev=806.07 00:11:38.252 clat (usec): min=4551, max=62833, avg=20117.39, stdev=10298.05 00:11:38.252 lat (usec): min=4564, max=62846, avg=20258.15, stdev=10369.58 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 5407], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10552], 00:11:38.252 | 30.00th=[13960], 40.00th=[17957], 50.00th=[19268], 60.00th=[19530], 00:11:38.252 | 70.00th=[19792], 80.00th=[26608], 90.00th=[35914], 95.00th=[39584], 00:11:38.252 | 99.00th=[58459], 99.50th=[60556], 99.90th=[62653], 99.95th=[62653], 00:11:38.252 | 99.99th=[62653] 00:11:38.252 bw ( KiB/s): min=12008, max=12872, per=17.79%, avg=12440.00, stdev=610.94, samples=2 00:11:38.252 iops : min= 3002, max= 3218, avg=3110.00, stdev=152.74, samples=2 00:11:38.252 lat (msec) : 10=5.55%, 20=56.47%, 50=36.94%, 100=1.05% 00:11:38.252 cpu : usr=3.96%, sys=4.65%, ctx=351, majf=0, minf=1 00:11:38.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:38.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.252 issued rwts: total=3072,3238,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:38.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.252 job2: (groupid=0, jobs=1): err= 0: pid=3793184: Thu Jul 25 10:26:41 2024 00:11:38.252 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:11:38.252 slat (nsec): min=1750, max=19302k, avg=102303.15, stdev=737482.38 00:11:38.252 clat (usec): min=7081, max=52898, avg=15306.23, stdev=6566.48 00:11:38.252 lat (usec): min=7107, max=52923, avg=15408.54, stdev=6610.47 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[11076], 20.00th=[11863], 00:11:38.252 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:11:38.252 | 70.00th=[14091], 80.00th=[16450], 90.00th=[25822], 95.00th=[31327], 00:11:38.252 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39584], 00:11:38.252 | 99.99th=[52691] 00:11:38.252 write: IOPS=4707, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1004msec); 0 zone resets 00:11:38.252 slat (usec): min=2, max=9243, avg=86.32, stdev=486.40 00:11:38.252 clat (usec): min=396, max=39908, avg=11957.06, stdev=3601.94 00:11:38.252 lat (usec): min=1665, max=39911, avg=12043.38, stdev=3622.24 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 3032], 5.00th=[ 6849], 10.00th=[ 8455], 20.00th=[10552], 00:11:38.252 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:11:38.252 | 70.00th=[12256], 80.00th=[12911], 90.00th=[16319], 95.00th=[18482], 00:11:38.252 | 99.00th=[23725], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:11:38.252 | 99.99th=[40109] 00:11:38.252 bw ( KiB/s): min=16384, max=20480, per=26.36%, avg=18432.00, stdev=2896.31, samples=2 00:11:38.252 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:38.252 lat (usec) : 500=0.01% 00:11:38.252 lat (msec) : 2=0.14%, 4=0.42%, 10=10.11%, 20=80.39%, 50=8.91% 00:11:38.252 lat (msec) : 100=0.01% 00:11:38.252 cpu : usr=4.29%, sys=6.48%, ctx=436, majf=0, minf=1 00:11:38.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:38.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.252 issued rwts: total=4608,4726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.252 job3: (groupid=0, jobs=1): err= 0: pid=3793190: Thu Jul 25 10:26:41 2024 00:11:38.252 read: IOPS=5029, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1004msec) 00:11:38.252 slat (nsec): min=1729, max=13147k, avg=96078.69, stdev=687436.05 00:11:38.252 clat (usec): min=3553, max=38306, avg=13123.88, stdev=4610.80 00:11:38.252 lat (usec): min=3561, max=38330, avg=13219.96, stdev=4655.12 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10421], 00:11:38.252 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:11:38.252 | 70.00th=[13435], 80.00th=[15401], 90.00th=[17695], 95.00th=[21890], 00:11:38.252 | 99.00th=[33817], 99.50th=[33817], 99.90th=[37487], 99.95th=[38011], 00:11:38.252 | 99.99th=[38536] 00:11:38.252 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:11:38.252 slat (usec): min=2, max=17025, avg=89.26, stdev=585.58 00:11:38.252 clat (usec): min=3080, max=39505, avg=11735.58, stdev=5578.86 00:11:38.252 lat (usec): min=3762, max=39563, avg=11824.85, stdev=5611.35 00:11:38.252 clat percentiles (usec): 00:11:38.252 | 1.00th=[ 4490], 5.00th=[ 
6194], 10.00th=[ 6849], 20.00th=[ 8586], 00:11:38.252 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11076], 00:11:38.252 | 70.00th=[11469], 80.00th=[12780], 90.00th=[19530], 95.00th=[26346], 00:11:38.252 | 99.00th=[32900], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:11:38.252 | 99.99th=[39584] 00:11:38.252 bw ( KiB/s): min=17976, max=22984, per=29.29%, avg=20480.00, stdev=3541.19, samples=2 00:11:38.252 iops : min= 4494, max= 5746, avg=5120.00, stdev=885.30, samples=2 00:11:38.252 lat (msec) : 4=0.36%, 10=27.34%, 20=65.14%, 50=7.16% 00:11:38.252 cpu : usr=6.08%, sys=7.88%, ctx=430, majf=0, minf=1 00:11:38.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:38.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.252 issued rwts: total=5050,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.252 00:11:38.252 Run status group 0 (all jobs): 00:11:38.252 READ: bw=65.9MiB/s (69.1MB/s), 11.9MiB/s-19.6MiB/s (12.4MB/s-20.6MB/s), io=66.7MiB (69.9MB), run=1004-1012msec 00:11:38.252 WRITE: bw=68.3MiB/s (71.6MB/s), 12.5MiB/s-19.9MiB/s (13.1MB/s-20.9MB/s), io=69.1MiB (72.5MB), run=1004-1012msec 00:11:38.252 00:11:38.252 Disk stats (read/write): 00:11:38.253 nvme0n1: ios=3634/3929, merge=0/0, ticks=52932/48056, in_queue=100988, util=84.57% 00:11:38.253 nvme0n2: ios=2572/2560, merge=0/0, ticks=41094/26532, in_queue=67626, util=85.14% 00:11:38.253 nvme0n3: ios=3584/3824, merge=0/0, ticks=33067/21366, in_queue=54433, util=87.94% 00:11:38.253 nvme0n4: ios=3952/4096, merge=0/0, ticks=43937/39923, in_queue=83860, util=89.27% 00:11:38.253 10:26:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:38.253 [global] 00:11:38.253 thread=1 00:11:38.253 invalidate=1 00:11:38.253 rw=randwrite 00:11:38.253 time_based=1 00:11:38.253 runtime=1 00:11:38.253 ioengine=libaio 00:11:38.253 direct=1 00:11:38.253 bs=4096 00:11:38.253 iodepth=128 00:11:38.253 norandommap=0 00:11:38.253 numjobs=1 00:11:38.253 00:11:38.253 verify_dump=1 00:11:38.253 verify_backlog=512 00:11:38.253 verify_state_save=0 00:11:38.253 do_verify=1 00:11:38.253 verify=crc32c-intel 00:11:38.253 [job0] 00:11:38.253 filename=/dev/nvme0n1 00:11:38.253 [job1] 00:11:38.253 filename=/dev/nvme0n2 00:11:38.253 [job2] 00:11:38.253 filename=/dev/nvme0n3 00:11:38.253 [job3] 00:11:38.253 filename=/dev/nvme0n4 00:11:38.253 Could not set queue depth (nvme0n1) 00:11:38.253 Could not set queue depth (nvme0n2) 00:11:38.253 Could not set queue depth (nvme0n3) 00:11:38.253 Could not set queue depth (nvme0n4) 00:11:38.510 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.510 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.510 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.510 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.510 fio-3.35 00:11:38.510 Starting 4 threads 00:11:39.889 00:11:39.889 job0: (groupid=0, jobs=1): err= 0: pid=3793585: Thu Jul 25 10:26:43 2024 00:11:39.889 read: IOPS=4039, BW=15.8MiB/s 
(16.5MB/s)(16.0MiB/1014msec) 00:11:39.889 slat (usec): min=2, max=36925, avg=126.35, stdev=1244.79 00:11:39.889 clat (usec): min=5669, max=76349, avg=16901.52, stdev=9949.71 00:11:39.889 lat (usec): min=5723, max=76359, avg=17027.87, stdev=10060.21 00:11:39.889 clat percentiles (usec): 00:11:39.889 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10945], 00:11:39.889 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13698], 60.00th=[14615], 00:11:39.889 | 70.00th=[16450], 80.00th=[20055], 90.00th=[28705], 95.00th=[41681], 00:11:39.889 | 99.00th=[56361], 99.50th=[60556], 99.90th=[60556], 99.95th=[70779], 00:11:39.889 | 99.99th=[76022] 00:11:39.889 write: IOPS=4234, BW=16.5MiB/s (17.3MB/s)(16.8MiB/1014msec); 0 zone resets 00:11:39.889 slat (usec): min=2, max=15018, avg=92.45, stdev=700.64 00:11:39.889 clat (usec): min=546, max=70539, avg=13854.85, stdev=10095.01 00:11:39.889 lat (usec): min=571, max=70550, avg=13947.30, stdev=10131.53 00:11:39.889 clat percentiles (usec): 00:11:39.889 | 1.00th=[ 1778], 5.00th=[ 4359], 10.00th=[ 5997], 20.00th=[ 7570], 00:11:39.889 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11600], 60.00th=[12125], 00:11:39.889 | 70.00th=[14091], 80.00th=[15795], 90.00th=[24773], 95.00th=[36963], 00:11:39.889 | 99.00th=[57410], 99.50th=[57410], 99.90th=[61604], 99.95th=[61604], 00:11:39.889 | 99.99th=[70779] 00:11:39.889 bw ( KiB/s): min=16008, max=17328, per=21.25%, avg=16668.00, stdev=933.38, samples=2 00:11:39.889 iops : min= 4002, max= 4332, avg=4167.00, stdev=233.35, samples=2 00:11:39.889 lat (usec) : 750=0.02%, 1000=0.02% 00:11:39.889 lat (msec) : 2=0.83%, 4=1.10%, 10=23.65%, 20=57.04%, 50=15.30% 00:11:39.889 lat (msec) : 100=2.03% 00:11:39.889 cpu : usr=4.15%, sys=5.63%, ctx=286, majf=0, minf=1 00:11:39.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:39.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.889 issued rwts: total=4096,4294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.889 job1: (groupid=0, jobs=1): err= 0: pid=3793607: Thu Jul 25 10:26:43 2024 00:11:39.889 read: IOPS=5602, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1005msec) 00:11:39.889 slat (usec): min=2, max=9758, avg=89.85, stdev=638.29 00:11:39.889 clat (usec): min=2411, max=26133, avg=11738.90, stdev=2939.24 00:11:39.889 lat (usec): min=5732, max=26142, avg=11828.76, stdev=2977.75 00:11:39.889 clat percentiles (usec): 00:11:39.889 | 1.00th=[ 7373], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:11:39.889 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:11:39.889 | 70.00th=[12125], 80.00th=[13304], 90.00th=[15270], 95.00th=[17433], 00:11:39.889 | 99.00th=[22676], 99.50th=[23987], 99.90th=[25560], 99.95th=[25560], 00:11:39.889 | 99.99th=[26084] 00:11:39.889 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:11:39.889 slat (usec): min=3, max=10013, avg=78.90, stdev=476.67 00:11:39.889 clat (usec): min=1915, max=30762, avg=10886.11, stdev=4940.15 00:11:39.889 lat (usec): min=1934, max=30785, avg=10965.01, stdev=4960.52 00:11:39.889 clat percentiles (usec): 00:11:39.889 | 1.00th=[ 3982], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 6980], 00:11:39.889 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10290], 00:11:39.889 | 70.00th=[11731], 80.00th=[14222], 90.00th=[17957], 95.00th=[21890], 00:11:39.889 | 
99.00th=[26084], 99.50th=[27395], 99.90th=[28181], 99.95th=[28705], 00:11:39.889 | 99.99th=[30802] 00:11:39.889 bw ( KiB/s): min=22456, max=22600, per=28.72%, avg=22528.00, stdev=101.82, samples=2 00:11:39.889 iops : min= 5614, max= 5650, avg=5632.00, stdev=25.46, samples=2 00:11:39.890 lat (msec) : 2=0.08%, 4=0.44%, 10=40.15%, 20=54.29%, 50=5.03% 00:11:39.890 cpu : usr=7.07%, sys=7.67%, ctx=414, majf=0, minf=1 00:11:39.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:39.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.890 issued rwts: total=5631,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.890 job2: (groupid=0, jobs=1): err= 0: pid=3793629: Thu Jul 25 10:26:43 2024 00:11:39.890 read: IOPS=5188, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1008msec) 00:11:39.890 slat (usec): min=3, max=10085, avg=92.68, stdev=674.84 00:11:39.890 clat (usec): min=6339, max=26589, avg=12920.38, stdev=2477.97 00:11:39.890 lat (usec): min=7016, max=26602, avg=13013.05, stdev=2529.22 00:11:39.890 clat percentiles (usec): 00:11:39.890 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:11:39.890 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[13042], 00:11:39.890 | 70.00th=[13566], 80.00th=[14222], 90.00th=[16712], 95.00th=[17957], 00:11:39.890 | 99.00th=[20579], 99.50th=[21890], 99.90th=[23462], 99.95th=[26608], 00:11:39.890 | 99.99th=[26608] 00:11:39.890 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:11:39.890 slat (usec): min=3, max=10719, avg=78.67, stdev=584.87 00:11:39.890 clat (usec): min=2124, max=21460, avg=10457.02, stdev=3314.65 00:11:39.890 lat (usec): min=2140, max=28576, avg=10535.69, stdev=3335.72 00:11:39.890 clat percentiles (usec): 00:11:39.890 | 1.00th=[ 3785], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7767], 00:11:39.890 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10814], 00:11:39.890 | 70.00th=[11600], 80.00th=[12649], 90.00th=[15008], 95.00th=[16712], 00:11:39.890 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:11:39.890 | 99.99th=[21365] 00:11:39.890 bw ( KiB/s): min=21848, max=23072, per=28.63%, avg=22460.00, stdev=865.50, samples=2 00:11:39.890 iops : min= 5462, max= 5768, avg=5615.00, stdev=216.37, samples=2 00:11:39.890 lat (msec) : 4=0.75%, 10=29.64%, 20=68.28%, 50=1.33% 00:11:39.890 cpu : usr=6.65%, sys=9.83%, ctx=313, majf=0, minf=1 00:11:39.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:39.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.890 issued rwts: total=5230,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.890 job3: (groupid=0, jobs=1): err= 0: pid=3793637: Thu Jul 25 10:26:43 2024 00:11:39.890 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:11:39.890 slat (nsec): min=1746, max=20539k, avg=116864.71, stdev=856489.33 00:11:39.890 clat (usec): min=4791, max=48494, avg=16566.76, stdev=7196.34 00:11:39.890 lat (usec): min=5476, max=48501, avg=16683.62, stdev=7244.15 00:11:39.890 clat percentiles (usec): 00:11:39.890 | 1.00th=[ 5932], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11469], 00:11:39.890 | 30.00th=[12518], 
40.00th=[13304], 50.00th=[14222], 60.00th=[15270], 00:11:39.890 | 70.00th=[17171], 80.00th=[20317], 90.00th=[27919], 95.00th=[32375], 00:11:39.890 | 99.00th=[40633], 99.50th=[40633], 99.90th=[45351], 99.95th=[45351], 00:11:39.890 | 99.99th=[48497] 00:11:39.890 write: IOPS=4266, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1014msec); 0 zone resets 00:11:39.890 slat (usec): min=2, max=18326, avg=85.34, stdev=666.32 00:11:39.890 clat (usec): min=1399, max=57119, avg=14085.66, stdev=7979.17 00:11:39.890 lat (usec): min=1411, max=57130, avg=14171.00, stdev=8019.64 00:11:39.890 clat percentiles (usec): 00:11:39.890 | 1.00th=[ 2343], 5.00th=[ 5145], 10.00th=[ 7046], 20.00th=[ 8848], 00:11:39.890 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[12125], 60.00th=[13304], 00:11:39.890 | 70.00th=[15533], 80.00th=[17957], 90.00th=[22414], 95.00th=[30278], 00:11:39.890 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:11:39.890 | 99.99th=[56886] 00:11:39.890 bw ( KiB/s): min=16384, max=17208, per=21.41%, avg=16796.00, stdev=582.66, samples=2 00:11:39.890 iops : min= 4096, max= 4302, avg=4199.00, stdev=145.66, samples=2 00:11:39.890 lat (msec) : 2=0.07%, 4=1.20%, 10=18.19%, 20=62.70%, 50=17.81% 00:11:39.890 lat (msec) : 100=0.02% 00:11:39.890 cpu : usr=3.85%, sys=5.53%, ctx=321, majf=0, minf=1 00:11:39.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:39.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.890 issued rwts: total=4096,4326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.890 00:11:39.890 Run status group 0 (all jobs): 00:11:39.890 READ: bw=73.4MiB/s (77.0MB/s), 15.8MiB/s-21.9MiB/s (16.5MB/s-22.9MB/s), io=74.4MiB (78.0MB), run=1005-1014msec 00:11:39.890 WRITE: bw=76.6MiB/s (80.3MB/s), 16.5MiB/s-21.9MiB/s (17.3MB/s-23.0MB/s), io=77.7MiB (81.4MB), run=1005-1014msec 00:11:39.890 00:11:39.890 Disk stats (read/write): 00:11:39.890 nvme0n1: ios=3634/3911, merge=0/0, ticks=50504/50262, in_queue=100766, util=97.09% 00:11:39.890 nvme0n2: ios=4310/4608, merge=0/0, ticks=48453/51102, in_queue=99555, util=84.41% 00:11:39.890 nvme0n3: ios=4153/4511, merge=0/0, ticks=52540/46189, in_queue=98729, util=99.36% 00:11:39.890 nvme0n4: ios=3326/3584, merge=0/0, ticks=34761/33657, in_queue=68418, util=89.36% 00:11:39.890 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:39.890 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3793692 00:11:39.890 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:39.890 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:39.890 [global] 00:11:39.890 thread=1 00:11:39.890 invalidate=1 00:11:39.890 rw=read 00:11:39.890 time_based=1 00:11:39.890 runtime=10 00:11:39.890 ioengine=libaio 00:11:39.890 direct=1 00:11:39.890 bs=4096 00:11:39.890 iodepth=1 00:11:39.890 norandommap=1 00:11:39.890 numjobs=1 00:11:39.890 00:11:39.890 [job0] 00:11:39.890 filename=/dev/nvme0n1 00:11:39.890 [job1] 00:11:39.890 filename=/dev/nvme0n2 00:11:39.890 [job2] 00:11:39.890 filename=/dev/nvme0n3 00:11:39.890 [job3] 00:11:39.890 filename=/dev/nvme0n4 00:11:39.890 Could not set queue depth (nvme0n1) 00:11:39.890 Could not set queue depth 
(nvme0n2) 00:11:39.890 Could not set queue depth (nvme0n3) 00:11:39.890 Could not set queue depth (nvme0n4) 00:11:40.148 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.148 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.148 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.148 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.148 fio-3.35 00:11:40.148 Starting 4 threads 00:11:42.678 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:42.936 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21622784, buflen=4096 00:11:42.936 fio: pid=3794060, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:42.936 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:42.936 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=4136960, buflen=4096 00:11:42.936 fio: pid=3794053, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:42.936 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.936 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:43.194 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.194 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:43.194 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=303104, buflen=4096 00:11:43.194 fio: pid=3794015, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:43.451 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=24141824, buflen=4096 00:11:43.451 fio: pid=3794032, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:43.451 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.451 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:43.451 00:11:43.451 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794015: Thu Jul 25 10:26:47 2024 00:11:43.451 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(296KiB/3032msec) 00:11:43.451 slat (usec): min=10, max=13743, avg=203.27, stdev=1584.61 00:11:43.451 clat (usec): min=740, max=41951, avg=40477.73, stdev=4686.99 00:11:43.451 lat (usec): min=775, max=55098, avg=40683.40, stdev=4982.71 00:11:43.451 clat percentiles (usec): 00:11:43.451 | 1.00th=[ 742], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:43.451 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:43.451 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:43.451 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:43.451 | 99.99th=[42206] 00:11:43.451 bw ( KiB/s): min= 96, max= 104, per=0.65%, avg=99.20, stdev= 4.38, samples=5 00:11:43.451 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:43.451 lat (usec) : 750=1.33% 00:11:43.451 lat (msec) : 50=97.33% 00:11:43.452 cpu : usr=0.00%, sys=0.10%, ctx=76, majf=0, minf=1 00:11:43.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.452 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794032: Thu Jul 25 10:26:47 2024 00:11:43.452 read: IOPS=1839, BW=7356KiB/s (7533kB/s)(23.0MiB/3205msec) 00:11:43.452 slat (usec): min=6, max=15325, avg=20.97, stdev=331.58 00:11:43.452 clat (usec): min=398, max=766, avg=516.77, stdev=61.74 00:11:43.452 lat (usec): min=407, max=15940, avg=537.74, stdev=341.58 00:11:43.452 clat percentiles (usec): 00:11:43.452 | 1.00th=[ 420], 5.00th=[ 441], 10.00th=[ 457], 20.00th=[ 478], 00:11:43.452 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 498], 60.00th=[ 506], 00:11:43.452 | 70.00th=[ 529], 80.00th=[ 562], 90.00th=[ 627], 95.00th=[ 652], 00:11:43.452 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 750], 00:11:43.452 | 99.99th=[ 766] 00:11:43.452 bw ( KiB/s): min= 6976, max= 8032, per=48.27%, avg=7384.83, stdev=412.15, samples=6 00:11:43.452 iops : min= 1744, max= 2008, avg=1846.17, stdev=103.03, samples=6 00:11:43.452 lat (usec) : 500=53.77%, 750=46.16%, 1000=0.05% 00:11:43.452 cpu : usr=0.56%, sys=2.72%, ctx=5903, majf=0, minf=1 00:11:43.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 issued rwts: total=5895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.452 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794053: Thu Jul 25 10:26:47 2024 00:11:43.452 read: IOPS=359, BW=1437KiB/s (1472kB/s)(4040KiB/2811msec) 00:11:43.452 slat (nsec): min=9114, max=63205, avg=11243.96, stdev=3984.16 00:11:43.452 clat (usec): min=373, max=42077, avg=2748.88, stdev=9274.36 00:11:43.452 lat (usec): min=383, max=42110, avg=2760.11, stdev=9277.66 00:11:43.452 clat percentiles (usec): 00:11:43.452 | 1.00th=[ 396], 5.00th=[ 437], 10.00th=[ 482], 20.00th=[ 494], 00:11:43.452 | 30.00th=[ 498], 40.00th=[ 502], 50.00th=[ 506], 60.00th=[ 510], 00:11:43.452 | 70.00th=[ 515], 80.00th=[ 523], 90.00th=[ 537], 95.00th=[40633], 00:11:43.452 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:11:43.452 | 99.99th=[42206] 00:11:43.452 bw ( KiB/s): min= 104, max= 6664, per=10.49%, avg=1604.80, stdev=2856.91, samples=5 00:11:43.452 iops : min= 26, max= 1666, avg=401.20, stdev=714.23, samples=5 00:11:43.452 lat (usec) : 500=35.51%, 750=58.85% 00:11:43.452 lat (msec) : 50=5.54% 00:11:43.452 cpu : usr=0.28%, sys=0.68%, ctx=1015, majf=0, minf=1 00:11:43.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:11:43.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.452 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794060: Thu Jul 25 10:26:47 2024 00:11:43.452 read: IOPS=2009, BW=8035KiB/s (8228kB/s)(20.6MiB/2628msec) 00:11:43.452 slat (nsec): min=8585, max=44152, avg=9594.30, stdev=1606.23 00:11:43.452 clat (usec): min=378, max=775, avg=481.88, stdev=29.42 00:11:43.452 lat (usec): min=387, max=803, avg=491.47, stdev=29.87 00:11:43.452 clat percentiles (usec): 00:11:43.452 | 1.00th=[ 408], 5.00th=[ 441], 10.00th=[ 449], 20.00th=[ 461], 00:11:43.452 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 482], 60.00th=[ 490], 00:11:43.452 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 515], 95.00th=[ 523], 00:11:43.452 | 99.00th=[ 545], 99.50th=[ 603], 99.90th=[ 701], 99.95th=[ 717], 00:11:43.452 | 99.99th=[ 775] 00:11:43.452 bw ( KiB/s): min= 7720, max= 8400, per=53.13%, avg=8128.00, stdev=304.68, samples=5 00:11:43.452 iops : min= 1930, max= 2100, avg=2032.00, stdev=76.17, samples=5 00:11:43.452 lat (usec) : 500=73.84%, 750=26.12%, 1000=0.02% 00:11:43.452 cpu : usr=1.18%, sys=2.47%, ctx=5281, majf=0, minf=2 00:11:43.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.452 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.452 00:11:43.452 Run status group 0 (all jobs): 00:11:43.452 READ: bw=14.9MiB/s (15.7MB/s), 97.6KiB/s-8035KiB/s (100.0kB/s-8228kB/s), io=47.9MiB (50.2MB), run=2628-3205msec 00:11:43.452 00:11:43.452 Disk stats (read/write): 00:11:43.452 nvme0n1: ios=69/0, merge=0/0, ticks=2789/0, in_queue=2789, util=93.96% 00:11:43.452 nvme0n2: ios=5708/0, merge=0/0, ticks=2994/0, in_queue=2994, util=95.44% 00:11:43.452 nvme0n3: ios=1044/0, merge=0/0, ticks=3411/0, in_queue=3411, util=99.40% 00:11:43.452 nvme0n4: ios=5228/0, merge=0/0, ticks=2482/0, in_queue=2482, util=96.44% 00:11:43.710 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.710 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:43.710 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.710 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:43.968 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.968 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:44.226 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.226 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:44.226 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:44.226 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3793692 00:11:44.226 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:44.226 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.484 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.484 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:44.484 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:44.485 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.485 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:44.485 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.485 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:44.485 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:44.485 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:44.485 nvmf hotplug test: fio failed as expected 00:11:44.485 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.769 rmmod nvme_tcp 00:11:44.769 rmmod nvme_fabrics 00:11:44.769 rmmod nvme_keyring 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3790800 ']' 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3790800 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3790800 ']' 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3790800 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3790800 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3790800' 00:11:44.769 killing process with pid 3790800 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3790800 00:11:44.769 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3790800 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.036 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.936 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:46.936 00:11:46.936 real 0m28.193s 00:11:46.936 user 2m2.618s 00:11:46.936 sys 0m10.285s 00:11:46.936 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.936 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.936 ************************************ 00:11:46.936 END TEST nvmf_fio_target 00:11:46.936 ************************************ 00:11:47.195 10:26:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:47.196 
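The Remote I/O errors above are the expected outcome of the hotplug test that just finished: fio-wrapper is started in the background against the four exported namespaces, the backing raid/concat and malloc bdevs are deleted over RPC while reads are in flight, and every job is expected to fail before the subsystem is torn down. A condensed sketch of that sequence, using only commands visible in the trace (SPDK_DIR is shorthand for the workspace path, not a variable the test script defines):

  # read workload in the background against the connected namespaces
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK_DIR/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!

  # pull the backing bdevs out from under the running jobs
  $SPDK_DIR/scripts/rpc.py bdev_raid_delete concat0
  $SPDK_DIR/scripts/rpc.py bdev_raid_delete raid0
  for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $SPDK_DIR/scripts/rpc.py bdev_malloc_delete $b
  done

  # fio is expected to exit non-zero with "Remote I/O error" on every file
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

  # disconnect the initiator and drop the subsystem
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1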
10:26:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:47.196 ************************************ 00:11:47.196 START TEST nvmf_bdevio 00:11:47.196 ************************************ 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:47.196 * Looking for test storage... 00:11:47.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:47.196 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:55.314 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:55.315 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:55.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:55.315 Found net devices under 0000:af:00.0: cvl_0_0 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:55.315 Found net devices under 0000:af:00.1: cvl_0_1 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:55.315 10:26:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:55.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:11:55.315 00:11:55.315 --- 10.0.0.2 ping statistics --- 00:11:55.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.315 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:11:55.315 00:11:55.315 --- 10.0.0.1 ping statistics --- 00:11:55.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.315 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3798538 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3798538 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3798538 ']' 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.315 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.315 [2024-07-25 10:26:57.917728] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
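The bdevio run reuses the physical-NIC topology set up above: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched inside the namespace so NVMe/TCP traffic actually crosses between the two ports. A minimal sketch of that plumbing, restricted to commands shown in the trace (interface names are the ones this job detected; the binary path is shortened):

  # target port gets its own namespace and the target-side address
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # initiator port stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up

  # admit NVMe/TCP (port 4420) from the initiator side and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # the target runs inside the namespace (core mask 0x78, tracepoint group mask 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &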
00:11:55.315 [2024-07-25 10:26:57.917780] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.315 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.315 [2024-07-25 10:26:57.989901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.315 [2024-07-25 10:26:58.065612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.315 [2024-07-25 10:26:58.065651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.315 [2024-07-25 10:26:58.065660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.315 [2024-07-25 10:26:58.065669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.315 [2024-07-25 10:26:58.065676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.315 [2024-07-25 10:26:58.065790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:55.315 [2024-07-25 10:26:58.065898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:55.315 [2024-07-25 10:26:58.066008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.315 [2024-07-25 10:26:58.066009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.316 [2024-07-25 10:26:58.767836] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.316 Malloc0 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.316 [2024-07-25 10:26:58.822367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:55.316 { 00:11:55.316 "params": { 00:11:55.316 "name": "Nvme$subsystem", 00:11:55.316 "trtype": "$TEST_TRANSPORT", 00:11:55.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:55.316 "adrfam": "ipv4", 00:11:55.316 "trsvcid": "$NVMF_PORT", 00:11:55.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:55.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:55.316 "hdgst": ${hdgst:-false}, 00:11:55.316 "ddgst": ${ddgst:-false} 00:11:55.316 }, 00:11:55.316 "method": "bdev_nvme_attach_controller" 00:11:55.316 } 00:11:55.316 EOF 00:11:55.316 )") 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
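Before bdevio starts issuing I/O, the target is provisioned entirely over rpc_cmd (in this suite a wrapper around scripts/rpc.py): a TCP transport, a 64 MiB / 512-byte-block malloc bdev, a subsystem with a fixed serial, its namespace, and a listener on the in-namespace address. The equivalent standalone calls look roughly like this; bdevio then attaches to that listener using the generated bdev_nvme_attach_controller JSON printed just below:

  # provision the bdevio target (flags exactly as traced above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio reads the attach-controller config from /dev/fd/62 and runs its
  # block-device test suite against the resulting Nvme1n1 bdev
  test/bdev/bdevio/bdevio --json /dev/fd/62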
00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:55.316 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:55.316 "params": { 00:11:55.316 "name": "Nvme1", 00:11:55.316 "trtype": "tcp", 00:11:55.316 "traddr": "10.0.0.2", 00:11:55.316 "adrfam": "ipv4", 00:11:55.316 "trsvcid": "4420", 00:11:55.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:55.316 "hdgst": false, 00:11:55.316 "ddgst": false 00:11:55.316 }, 00:11:55.316 "method": "bdev_nvme_attach_controller" 00:11:55.316 }' 00:11:55.316 [2024-07-25 10:26:58.875361] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:11:55.316 [2024-07-25 10:26:58.875405] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798614 ] 00:11:55.316 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.316 [2024-07-25 10:26:58.945087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.316 [2024-07-25 10:26:59.016237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.316 [2024-07-25 10:26:59.016330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.316 [2024-07-25 10:26:59.016333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.574 I/O targets: 00:11:55.574 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:55.574 00:11:55.574 00:11:55.574 CUnit - A unit testing framework for C - Version 2.1-3 00:11:55.574 http://cunit.sourceforge.net/ 00:11:55.574 00:11:55.574 00:11:55.574 Suite: bdevio tests on: Nvme1n1 00:11:55.574 Test: blockdev write read block ...passed 00:11:55.574 Test: blockdev write zeroes read block ...passed 00:11:55.574 Test: blockdev write zeroes read no split ...passed 00:11:55.831 Test: blockdev write zeroes read split ...passed 00:11:55.832 Test: blockdev write zeroes read split partial ...passed 00:11:55.832 Test: blockdev reset ...[2024-07-25 10:26:59.385391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:55.832 [2024-07-25 10:26:59.385455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca810 (9): Bad file descriptor 00:11:55.832 [2024-07-25 10:26:59.400317] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:55.832 passed 00:11:55.832 Test: blockdev write read 8 blocks ...passed 00:11:55.832 Test: blockdev write read size > 128k ...passed 00:11:55.832 Test: blockdev write read invalid size ...passed 00:11:55.832 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.832 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.832 Test: blockdev write read max offset ...passed 00:11:56.089 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.089 Test: blockdev writev readv 8 blocks ...passed 00:11:56.089 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.089 Test: blockdev writev readv block ...passed 00:11:56.089 Test: blockdev writev readv size > 128k ...passed 00:11:56.089 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.089 Test: blockdev comparev and writev ...[2024-07-25 10:26:59.656671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.089 [2024-07-25 10:26:59.656700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:56.089 [2024-07-25 10:26:59.656721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.089 [2024-07-25 10:26:59.656732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:56.089 [2024-07-25 10:26:59.657054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.089 [2024-07-25 10:26:59.657067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:56.089 [2024-07-25 10:26:59.657081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.089 [2024-07-25 10:26:59.657091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:56.089 [2024-07-25 10:26:59.657408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.089 [2024-07-25 10:26:59.657421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:56.089 [2024-07-25 10:26:59.657439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.089 [2024-07-25 10:26:59.657450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:56.090 [2024-07-25 10:26:59.657765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.090 [2024-07-25 10:26:59.657778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:56.090 [2024-07-25 10:26:59.657793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.090 [2024-07-25 10:26:59.657803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:56.090 passed 00:11:56.090 Test: blockdev nvme passthru rw ...passed 00:11:56.090 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:26:59.740152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.090 [2024-07-25 10:26:59.740171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:56.090 [2024-07-25 10:26:59.740360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.090 [2024-07-25 10:26:59.740373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:56.090 [2024-07-25 10:26:59.740563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.090 [2024-07-25 10:26:59.740576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:56.090 [2024-07-25 10:26:59.740765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.090 [2024-07-25 10:26:59.740778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:56.090 passed 00:11:56.090 Test: blockdev nvme admin passthru ...passed 00:11:56.348 Test: blockdev copy ...passed 00:11:56.348 00:11:56.348 Run Summary: Type Total Ran Passed Failed Inactive 00:11:56.348 suites 1 1 n/a 0 0 00:11:56.348 tests 23 23 23 0 0 00:11:56.348 asserts 152 152 152 0 n/a 00:11:56.348 00:11:56.348 Elapsed time = 1.252 seconds 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.348 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.348 rmmod nvme_tcp 00:11:56.348 rmmod nvme_fabrics 00:11:56.348 rmmod nvme_keyring 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
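The teardown mirrors the nvmftestfini sequence that followed the fio test earlier: unload the initiator-side NVMe/TCP kernel modules, kill the nvmf_tgt instance by its recorded pid, drop the network namespace, and flush the initiator address so the next test starts clean. Roughly (the pid is the one from this run; the namespace removal is an assumed equivalent of _remove_spdk_ns, whose commands are silenced in the trace):

  # initiator side: drop the kernel NVMe-oF modules
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # target side: stop nvmf_tgt and undo the namespace plumbing
  kill 3798538 && wait 3798538
  ip netns delete cvl_0_0_ns_spdk    # assumed; _remove_spdk_ns hides its commands behind a redirect
  ip -4 addr flush cvl_0_1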
00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3798538 ']' 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3798538 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3798538 ']' 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3798538 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.348 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3798538 00:11:56.606 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:56.606 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:56.606 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3798538' 00:11:56.606 killing process with pid 3798538 00:11:56.606 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3798538 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3798538 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.607 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:59.141 00:11:59.141 real 0m11.660s 00:11:59.141 user 0m12.898s 00:11:59.141 sys 0m5.915s 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:59.141 ************************************ 00:11:59.141 END TEST nvmf_bdevio 00:11:59.141 ************************************ 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:59.141 00:11:59.141 real 4m53.603s 00:11:59.141 user 10m46.119s 00:11:59.141 sys 1m59.782s 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:59.141 ************************************ 00:11:59.141 END TEST nvmf_target_core 00:11:59.141 ************************************ 00:11:59.141 10:27:02 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:59.141 10:27:02 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:59.141 10:27:02 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.141 10:27:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:59.141 ************************************ 00:11:59.141 START TEST nvmf_target_extra 00:11:59.141 ************************************ 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:59.141 * Looking for test storage... 00:11:59.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.141 10:27:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
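(Editor's note: the nvmf_example test starting here launches SPDK's example NVMe-oF target and drives it with spdk_nvme_perf. The sketch below is condensed from the commands visible later in this trace; running rpc.py directly, backgrounding the target with & and waiting for its RPC socket are assumptions, since the harness uses its own rpc_cmd/waitforlisten helpers.)

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the example NVMe-oF target inside the test network namespace,
# with the same shm id, huge-dir group and core mask as this run.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &

# Once the target is listening on /var/tmp/spdk.sock, create the TCP transport,
# a 64 MiB / 512 B malloc bdev and a subsystem, then expose it on 10.0.0.2:4420.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512
"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive the target from the initiator side with the perf tool, as the test does.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'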
00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.142 ************************************ 00:11:59.142 START TEST nvmf_example 00:11:59.142 ************************************ 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:59.142 * Looking for test storage... 00:11:59.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.142 10:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.142 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.143 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:59.143 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:59.143 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:59.143 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:07.259 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:07.259 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:07.259 Found net devices under 0000:af:00.0: cvl_0_0 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.259 10:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:07.259 Found net devices under 0000:af:00.1: cvl_0_1 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:07.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:07.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:12:07.259 00:12:07.259 --- 10.0.0.2 ping statistics --- 00:12:07.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.259 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:07.259 00:12:07.259 --- 10.0.0.1 ping statistics --- 00:12:07.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.259 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:07.259 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3803151 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3803151 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3803151 ']' 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.260 10:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.260 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.260 10:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:07.260 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:07.260 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.479 Initializing NVMe Controllers 00:12:19.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:19.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:19.479 Initialization complete. Launching workers. 00:12:19.479 ======================================================== 00:12:19.479 Latency(us) 00:12:19.479 Device Information : IOPS MiB/s Average min max 00:12:19.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16763.20 65.48 3817.77 652.91 18343.66 00:12:19.479 ======================================================== 00:12:19.479 Total : 16763.20 65.48 3817.77 652.91 18343.66 00:12:19.479 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.479 rmmod nvme_tcp 00:12:19.479 rmmod nvme_fabrics 00:12:19.479 rmmod nvme_keyring 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3803151 ']' 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3803151 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3803151 ']' 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3803151 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:19.479 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.479 10:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3803151 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3803151' 00:12:19.480 killing process with pid 3803151 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3803151 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3803151 00:12:19.480 nvmf threads initialize successfully 00:12:19.480 bdev subsystem init successfully 00:12:19.480 created a nvmf target service 00:12:19.480 create targets's poll groups done 00:12:19.480 all subsystems of target started 00:12:19.480 nvmf target is running 00:12:19.480 all subsystems of target stopped 00:12:19.480 destroy targets's poll groups done 00:12:19.480 destroyed the nvmf target service 00:12:19.480 bdev subsystem finish successfully 00:12:19.480 nvmf threads destroy successfully 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.480 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.049 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:20.049 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:20.049 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.049 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.049 00:12:20.049 real 0m20.923s 00:12:20.050 user 0m46.146s 00:12:20.050 sys 0m7.412s 00:12:20.050 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.050 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.050 ************************************ 00:12:20.050 END TEST nvmf_example 00:12:20.050 ************************************ 00:12:20.050 10:27:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:20.050 10:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.050 10:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.050 10:27:23 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.050 ************************************ 00:12:20.050 START TEST nvmf_filesystem 00:12:20.050 ************************************ 00:12:20.050 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:20.356 * Looking for test storage... 00:12:20.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:20.356 10:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:20.356 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:20.357 10:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:20.357 10:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:20.357 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:20.357 10:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:20.357 #define SPDK_CONFIG_H 00:12:20.357 #define SPDK_CONFIG_APPS 1 00:12:20.357 #define SPDK_CONFIG_ARCH native 00:12:20.357 #undef SPDK_CONFIG_ASAN 00:12:20.357 #undef SPDK_CONFIG_AVAHI 00:12:20.357 #undef SPDK_CONFIG_CET 00:12:20.357 #define SPDK_CONFIG_COVERAGE 1 00:12:20.357 #define SPDK_CONFIG_CROSS_PREFIX 00:12:20.357 #undef SPDK_CONFIG_CRYPTO 00:12:20.357 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:20.357 #undef SPDK_CONFIG_CUSTOMOCF 00:12:20.357 #undef SPDK_CONFIG_DAOS 00:12:20.357 #define SPDK_CONFIG_DAOS_DIR 00:12:20.357 #define SPDK_CONFIG_DEBUG 1 00:12:20.357 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:20.357 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:20.357 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:20.357 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:20.357 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:20.357 #undef SPDK_CONFIG_DPDK_UADK 00:12:20.357 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:20.357 #define SPDK_CONFIG_EXAMPLES 1 00:12:20.357 #undef SPDK_CONFIG_FC 00:12:20.358 #define SPDK_CONFIG_FC_PATH 00:12:20.358 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:20.358 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:20.358 #undef SPDK_CONFIG_FUSE 00:12:20.358 #undef SPDK_CONFIG_FUZZER 00:12:20.358 #define SPDK_CONFIG_FUZZER_LIB 00:12:20.358 #undef SPDK_CONFIG_GOLANG 00:12:20.358 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:20.358 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:20.358 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:20.358 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:20.358 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:20.358 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:20.358 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:20.358 #define SPDK_CONFIG_IDXD 1 00:12:20.358 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:20.358 #undef SPDK_CONFIG_IPSEC_MB 00:12:20.358 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:20.358 #define SPDK_CONFIG_ISAL 1 00:12:20.358 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:20.358 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:20.358 #define SPDK_CONFIG_LIBDIR 00:12:20.358 #undef SPDK_CONFIG_LTO 00:12:20.358 #define SPDK_CONFIG_MAX_LCORES 128 00:12:20.358 #define SPDK_CONFIG_NVME_CUSE 1 00:12:20.358 #undef SPDK_CONFIG_OCF 00:12:20.358 #define SPDK_CONFIG_OCF_PATH 00:12:20.358 #define SPDK_CONFIG_OPENSSL_PATH 00:12:20.358 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:20.358 #define SPDK_CONFIG_PGO_DIR 00:12:20.358 #undef SPDK_CONFIG_PGO_USE 00:12:20.358 #define SPDK_CONFIG_PREFIX /usr/local 00:12:20.358 #undef SPDK_CONFIG_RAID5F 00:12:20.358 #undef SPDK_CONFIG_RBD 00:12:20.358 #define SPDK_CONFIG_RDMA 1 00:12:20.358 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:20.358 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:20.358 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:20.358 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:20.358 #define SPDK_CONFIG_SHARED 1 00:12:20.358 #undef SPDK_CONFIG_SMA 00:12:20.358 #define SPDK_CONFIG_TESTS 1 00:12:20.358 #undef SPDK_CONFIG_TSAN 00:12:20.358 #define SPDK_CONFIG_UBLK 1 00:12:20.358 #define SPDK_CONFIG_UBSAN 1 00:12:20.358 #undef SPDK_CONFIG_UNIT_TESTS 00:12:20.358 #undef SPDK_CONFIG_URING 00:12:20.358 #define SPDK_CONFIG_URING_PATH 00:12:20.358 #undef SPDK_CONFIG_URING_ZNS 00:12:20.358 #undef SPDK_CONFIG_USDT 00:12:20.358 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:20.358 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:20.358 #define SPDK_CONFIG_VFIO_USER 1 00:12:20.358 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:12:20.358 #define SPDK_CONFIG_VHOST 1 00:12:20.358 #define SPDK_CONFIG_VIRTIO 1 00:12:20.358 #undef SPDK_CONFIG_VTUNE 00:12:20.358 #define SPDK_CONFIG_VTUNE_DIR 00:12:20.358 #define SPDK_CONFIG_WERROR 1 00:12:20.358 #define SPDK_CONFIG_WPDK_DIR 00:12:20.358 #undef SPDK_CONFIG_XNVME 00:12:20.358 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:20.358 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:20.359 10:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:20.359 10:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:20.359 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:20.360 10:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:20.360 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 3805636 ]] 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 3805636 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.M5nQuO 00:12:20.361 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.M5nQuO/tests/target /tmp/spdk.M5nQuO 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=955215872 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4329213952 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55353155584 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6389121024 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30861217792 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12325425152 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23031808 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30870216704 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=921600 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:20.362 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:20.363 * Looking for test storage... 
00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55353155584 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8603713536 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.363 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.364 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.935 
10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:26.935 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:26.935 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:26.935 Found net devices under 0000:af:00.0: cvl_0_0 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:26.935 Found net devices under 0000:af:00.1: cvl_0_1 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.935 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:26.936 00:12:26.936 --- 10.0.0.2 ping statistics --- 00:12:26.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.936 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:12:26.936 00:12:26.936 --- 10.0.0.1 ping statistics --- 00:12:26.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.936 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.936 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.194 ************************************ 00:12:27.194 START TEST nvmf_filesystem_no_in_capsule 00:12:27.194 ************************************ 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3808886 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3808886 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3808886 ']' 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.194 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.194 [2024-07-25 10:27:30.714209] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:12:27.194 [2024-07-25 10:27:30.714253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.194 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.194 [2024-07-25 10:27:30.787564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.194 [2024-07-25 10:27:30.864074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.194 [2024-07-25 10:27:30.864114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.195 [2024-07-25 10:27:30.864124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.195 [2024-07-25 10:27:30.864133] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.195 [2024-07-25 10:27:30.864140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
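Annotation: the nvmf_tcp_init sequence traced above builds the whole test topology from the two detected ports: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1; port 4420 is opened and reachability is verified in both directions before the target is launched inside the namespace. Condensed into a standalone sketch (interface names, addresses and the nvmf_tgt path are the ones from this run; run as root, error handling omitted):

    #!/usr/bin/env bash
    # Two-port NVMe/TCP test topology: target port in a namespace, initiator port in the host.
    set -e

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side (host namespace) and target side (inside the namespace).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open NVMe/TCP's port and check both directions answer before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target itself runs inside the namespace (launch command from this run).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &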
00:12:27.195 [2024-07-25 10:27:30.864187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.195 [2024-07-25 10:27:30.864284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.195 [2024-07-25 10:27:30.864301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.195 [2024-07-25 10:27:30.864302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.128 [2024-07-25 10:27:31.571926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.128 Malloc1 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.128 10:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.128 [2024-07-25 10:27:31.715835] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:28.128 { 00:12:28.128 "name": "Malloc1", 00:12:28.128 "aliases": [ 00:12:28.128 "21a7e741-2a8f-4f6a-87e6-73b7715d88fc" 00:12:28.128 ], 00:12:28.128 "product_name": "Malloc disk", 00:12:28.128 "block_size": 512, 00:12:28.128 "num_blocks": 1048576, 00:12:28.128 "uuid": "21a7e741-2a8f-4f6a-87e6-73b7715d88fc", 00:12:28.128 "assigned_rate_limits": { 00:12:28.128 "rw_ios_per_sec": 0, 00:12:28.128 "rw_mbytes_per_sec": 0, 00:12:28.128 "r_mbytes_per_sec": 0, 00:12:28.128 "w_mbytes_per_sec": 0 00:12:28.128 }, 00:12:28.128 "claimed": true, 00:12:28.128 "claim_type": "exclusive_write", 00:12:28.128 "zoned": false, 00:12:28.128 "supported_io_types": { 00:12:28.128 "read": 
true, 00:12:28.128 "write": true, 00:12:28.128 "unmap": true, 00:12:28.128 "flush": true, 00:12:28.128 "reset": true, 00:12:28.128 "nvme_admin": false, 00:12:28.128 "nvme_io": false, 00:12:28.128 "nvme_io_md": false, 00:12:28.128 "write_zeroes": true, 00:12:28.128 "zcopy": true, 00:12:28.128 "get_zone_info": false, 00:12:28.128 "zone_management": false, 00:12:28.128 "zone_append": false, 00:12:28.128 "compare": false, 00:12:28.128 "compare_and_write": false, 00:12:28.128 "abort": true, 00:12:28.128 "seek_hole": false, 00:12:28.128 "seek_data": false, 00:12:28.128 "copy": true, 00:12:28.128 "nvme_iov_md": false 00:12:28.128 }, 00:12:28.128 "memory_domains": [ 00:12:28.128 { 00:12:28.128 "dma_device_id": "system", 00:12:28.128 "dma_device_type": 1 00:12:28.128 }, 00:12:28.128 { 00:12:28.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.128 "dma_device_type": 2 00:12:28.128 } 00:12:28.128 ], 00:12:28.128 "driver_specific": {} 00:12:28.128 } 00:12:28.128 ]' 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:28.128 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:28.387 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:28.387 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:28.387 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:28.387 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:28.387 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.768 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.768 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.768 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.768 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:29.768 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:31.670 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:31.929 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:32.187 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.121 ************************************ 00:12:33.121 START TEST filesystem_ext4 00:12:33.121 ************************************ 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
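Annotation: between target start-up and the first mkfs, the trace above is the standard export-and-attach path: create the TCP transport, back a 512 MiB Malloc bdev, publish it as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, connect from the initiator, wait for the disk to appear, and lay down one GPT partition for the filesystem subtests. A compressed sketch (rpc_cmd in the trace wraps SPDK's scripts/rpc.py against /var/tmp/spdk.sock; the polling loop stands in for the script's waitforserial helper):

    #!/usr/bin/env bash
    set -e
    rpc=./scripts/rpc.py   # rpc_cmd in the trace resolves to this, talking to /var/tmp/spdk.sock

    # Target side: transport, 512 MiB malloc bdev, subsystem, namespace, listener.
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: attach over TCP and wait for the namespace to show up by serial.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --hostid=006f0d1b-21c0-e711-906e-00163566263e
    # Simple poll, not the test's waitforserial helper.
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done

    # One GPT partition spanning the device, reused by every filesystem subtest.
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe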
00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:33.121 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:33.121 mke2fs 1.46.5 (30-Dec-2021) 00:12:33.121 Discarding device blocks: 0/522240 done 00:12:33.121 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:33.121 Filesystem UUID: 46c040d8-acc8-4ee1-800c-f646e66a9a8b 00:12:33.121 Superblock backups stored on blocks: 00:12:33.121 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:33.121 00:12:33.121 Allocating group tables: 0/64 done 00:12:33.121 Writing inode tables: 0/64 done 00:12:33.379 Creating journal (8192 blocks): done 00:12:34.340 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:12:34.340 00:12:34.340 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:34.340 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:35.275 
10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3808886 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:35.275 00:12:35.275 real 0m2.104s 00:12:35.275 user 0m0.027s 00:12:35.275 sys 0m0.083s 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 ************************************ 00:12:35.275 END TEST filesystem_ext4 00:12:35.275 ************************************ 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.275 ************************************ 00:12:35.275 START TEST filesystem_btrfs 00:12:35.275 ************************************ 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:35.275 10:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:35.275 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:35.534 btrfs-progs v6.6.2 00:12:35.534 See https://btrfs.readthedocs.io for more information. 00:12:35.534 00:12:35.534 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:35.534 NOTE: several default settings have changed in version 5.15, please make sure 00:12:35.534 this does not affect your deployments: 00:12:35.534 - DUP for metadata (-m dup) 00:12:35.534 - enabled no-holes (-O no-holes) 00:12:35.534 - enabled free-space-tree (-R free-space-tree) 00:12:35.534 00:12:35.534 Label: (null) 00:12:35.534 UUID: ed4a1754-8a1c-415c-802e-f9b448d93193 00:12:35.534 Node size: 16384 00:12:35.534 Sector size: 4096 00:12:35.534 Filesystem size: 510.00MiB 00:12:35.534 Block group profiles: 00:12:35.534 Data: single 8.00MiB 00:12:35.534 Metadata: DUP 32.00MiB 00:12:35.534 System: DUP 8.00MiB 00:12:35.534 SSD detected: yes 00:12:35.534 Zoned device: no 00:12:35.534 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:35.534 Runtime features: free-space-tree 00:12:35.534 Checksum: crc32c 00:12:35.534 Number of devices: 1 00:12:35.534 Devices: 00:12:35.534 ID SIZE PATH 00:12:35.534 1 510.00MiB /dev/nvme0n1p1 00:12:35.534 00:12:35.534 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:35.534 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:35.792 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3808886 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:36.052 00:12:36.052 real 0m0.675s 00:12:36.052 user 0m0.032s 00:12:36.052 sys 0m0.138s 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:36.052 ************************************ 00:12:36.052 END TEST filesystem_btrfs 00:12:36.052 ************************************ 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.052 ************************************ 00:12:36.052 START TEST filesystem_xfs 00:12:36.052 ************************************ 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:36.052 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:36.053 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:36.053 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:36.053 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:36.053 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:36.053 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:36.053 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:36.053 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:36.053 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:36.053 = sectsz=512 attr=2, projid32bit=1 00:12:36.053 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:36.053 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:12:36.053 data = bsize=4096 blocks=130560, imaxpct=25 00:12:36.053 = sunit=0 swidth=0 blks 00:12:36.053 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:36.053 log =internal log bsize=4096 blocks=16384, version=2 00:12:36.053 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:36.053 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:36.989 Discarding blocks...Done. 00:12:36.989 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:36.989 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:39.521 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:39.521 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:39.521 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:39.521 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:39.521 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:39.521 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3808886 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:39.521 00:12:39.521 real 0m3.391s 00:12:39.521 user 0m0.032s 00:12:39.521 sys 0m0.083s 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:39.521 ************************************ 00:12:39.521 END TEST filesystem_xfs 00:12:39.521 ************************************ 00:12:39.521 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:39.780 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3808886 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3808886 ']' 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3808886 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3808886 00:12:40.038 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.039 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.039 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3808886' 00:12:40.039 killing process with pid 3808886 00:12:40.039 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3808886 00:12:40.039 10:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3808886 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:40.298 00:12:40.298 real 0m13.238s 00:12:40.298 user 0m51.678s 00:12:40.298 sys 0m1.820s 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.298 ************************************ 00:12:40.298 END TEST nvmf_filesystem_no_in_capsule 00:12:40.298 ************************************ 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:40.298 ************************************ 00:12:40.298 START TEST nvmf_filesystem_in_capsule 00:12:40.298 ************************************ 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3811327 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3811327 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3811327 ']' 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
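Annotation: each filesystem subtest above (ext4, btrfs, xfs) is the same smoke check: build the filesystem on the partition, mount it, write and delete a file with syncs in between, unmount, and confirm the target process and both block devices are still present; the test then tears everything down. As a single loop plus the teardown (device names and the PID are the ones from this run; mkfs force flags follow the make_filesystem helper, -F for ext4 and -f otherwise):

    #!/usr/bin/env bash
    set -e
    nvmfpid=3808886          # PID of the nvmf_tgt started for this test
    mkdir -p /mnt/device

    for fstype in ext4 btrfs xfs; do
        force=-f; [[ $fstype == ext4 ]] && force=-F
        "mkfs.$fstype" "$force" /dev/nvme0n1p1

        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device

        # Target must still be alive, and both the disk and the partition still visible.
        kill -0 "$nvmfpid"
        lsblk -l -o NAME | grep -q -w nvme0n1
        lsblk -l -o NAME | grep -q -w nvme0n1p1
    done

    # Teardown, as traced at the end of the test.
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"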
00:12:40.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.298 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.558 [2024-07-25 10:27:44.045742] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:12:40.558 [2024-07-25 10:27:44.045789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.558 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.558 [2024-07-25 10:27:44.118043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.558 [2024-07-25 10:27:44.188169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.558 [2024-07-25 10:27:44.188209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.558 [2024-07-25 10:27:44.188218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.558 [2024-07-25 10:27:44.188227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.558 [2024-07-25 10:27:44.188234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.558 [2024-07-25 10:27:44.188297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.558 [2024-07-25 10:27:44.188394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.558 [2024-07-25 10:27:44.188456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.558 [2024-07-25 10:27:44.188457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
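Annotation: the second pass, nvmf_filesystem_in_capsule, repeats exactly the same flow; the only functional difference is the -c value passed to nvmf_create_transport, i.e. the in-capsule data size: how many bytes of write data the initiator may carry inside the command capsule itself instead of having the target fetch them in a separate transfer. Side by side, as the two runs issue it:

    # First run: in-capsule data disabled.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # Second run: up to 4096 bytes of write data travel inside the command capsule.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096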
00:12:41.495 [2024-07-25 10:27:44.904926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.495 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.495 Malloc1 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.495 [2024-07-25 10:27:45.052023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:41.495 10:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:41.495 { 00:12:41.495 "name": "Malloc1", 00:12:41.495 "aliases": [ 00:12:41.495 "6182c18f-10ba-448a-a8c0-da40353a015e" 00:12:41.495 ], 00:12:41.495 "product_name": "Malloc disk", 00:12:41.495 "block_size": 512, 00:12:41.495 "num_blocks": 1048576, 00:12:41.495 "uuid": "6182c18f-10ba-448a-a8c0-da40353a015e", 00:12:41.495 "assigned_rate_limits": { 00:12:41.495 "rw_ios_per_sec": 0, 00:12:41.495 "rw_mbytes_per_sec": 0, 00:12:41.495 "r_mbytes_per_sec": 0, 00:12:41.495 "w_mbytes_per_sec": 0 00:12:41.495 }, 00:12:41.495 "claimed": true, 00:12:41.495 "claim_type": "exclusive_write", 00:12:41.495 "zoned": false, 00:12:41.495 "supported_io_types": { 00:12:41.495 "read": true, 00:12:41.495 "write": true, 00:12:41.495 "unmap": true, 00:12:41.495 "flush": true, 00:12:41.495 "reset": true, 00:12:41.495 "nvme_admin": false, 00:12:41.495 "nvme_io": false, 00:12:41.495 "nvme_io_md": false, 00:12:41.495 "write_zeroes": true, 00:12:41.495 "zcopy": true, 00:12:41.495 "get_zone_info": false, 00:12:41.495 "zone_management": false, 00:12:41.495 "zone_append": false, 00:12:41.495 "compare": false, 00:12:41.495 "compare_and_write": false, 00:12:41.495 "abort": true, 00:12:41.495 "seek_hole": false, 00:12:41.495 "seek_data": false, 00:12:41.495 "copy": true, 00:12:41.495 "nvme_iov_md": false 00:12:41.495 }, 00:12:41.495 "memory_domains": [ 00:12:41.495 { 00:12:41.495 "dma_device_id": "system", 00:12:41.495 "dma_device_type": 1 00:12:41.495 }, 00:12:41.495 { 00:12:41.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.495 "dma_device_type": 2 00:12:41.495 } 00:12:41.495 ], 00:12:41.495 "driver_specific": {} 00:12:41.495 } 00:12:41.495 ]' 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:41.495 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:41.495 10:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.873 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.873 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:42.873 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.873 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:42.873 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:45.403 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:45.970 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 ************************************ 00:12:46.904 START TEST filesystem_in_capsule_ext4 00:12:46.904 ************************************ 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:46.904 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:46.904 mke2fs 1.46.5 (30-Dec-2021) 00:12:46.904 Discarding device blocks: 0/522240 done 00:12:46.904 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:46.904 Filesystem UUID: d3f436fe-2ab2-419f-aec4-bf80d22efbda 00:12:46.904 Superblock backups stored on blocks: 00:12:46.904 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:12:46.904 00:12:46.904 Allocating group tables: 0/64 done 00:12:46.904 Writing inode tables: 0/64 done 00:12:48.278 Creating journal (8192 blocks): done 00:12:48.278 Writing superblocks and filesystem accounting information: 0/64 done 00:12:48.278 00:12:48.278 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:48.278 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3811327 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:49.249 00:12:49.249 real 0m2.515s 00:12:49.249 user 0m0.022s 00:12:49.249 sys 0m0.087s 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 ************************************ 00:12:49.249 END TEST filesystem_in_capsule_ext4 00:12:49.249 ************************************ 00:12:49.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:49.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:49.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.508 10:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.508 ************************************ 00:12:49.508 START TEST filesystem_in_capsule_btrfs 00:12:49.508 ************************************ 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:49.508 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:49.767 btrfs-progs v6.6.2 00:12:49.767 See https://btrfs.readthedocs.io for more information. 00:12:49.767 00:12:49.767 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:49.767 NOTE: several default settings have changed in version 5.15, please make sure 00:12:49.767 this does not affect your deployments: 00:12:49.767 - DUP for metadata (-m dup) 00:12:49.767 - enabled no-holes (-O no-holes) 00:12:49.767 - enabled free-space-tree (-R free-space-tree) 00:12:49.767 00:12:49.767 Label: (null) 00:12:49.767 UUID: c34148d5-a25c-4986-a963-bf5ca2c8c0fc 00:12:49.767 Node size: 16384 00:12:49.767 Sector size: 4096 00:12:49.767 Filesystem size: 510.00MiB 00:12:49.767 Block group profiles: 00:12:49.767 Data: single 8.00MiB 00:12:49.767 Metadata: DUP 32.00MiB 00:12:49.767 System: DUP 8.00MiB 00:12:49.767 SSD detected: yes 00:12:49.767 Zoned device: no 00:12:49.767 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:49.767 Runtime features: free-space-tree 00:12:49.767 Checksum: crc32c 00:12:49.767 Number of devices: 1 00:12:49.767 Devices: 00:12:49.767 ID SIZE PATH 00:12:49.767 1 510.00MiB /dev/nvme0n1p1 00:12:49.767 00:12:49.767 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:49.767 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3811327 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:50.025 00:12:50.025 real 0m0.660s 00:12:50.025 user 0m0.028s 00:12:50.025 sys 0m0.147s 00:12:50.025 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.025 10:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:50.025 ************************************ 00:12:50.025 END TEST filesystem_in_capsule_btrfs 00:12:50.025 ************************************ 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.283 ************************************ 00:12:50.283 START TEST filesystem_in_capsule_xfs 00:12:50.283 ************************************ 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:50.283 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:50.283 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:50.283 = sectsz=512 attr=2, projid32bit=1 00:12:50.283 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:50.283 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:50.283 data = bsize=4096 blocks=130560, imaxpct=25 00:12:50.283 = sunit=0 swidth=0 blks 00:12:50.283 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:50.283 log =internal log bsize=4096 blocks=16384, version=2 00:12:50.283 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:50.283 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:12:51.217 Discarding blocks...Done. 00:12:51.217 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:51.217 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3811327 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:53.116 00:12:53.116 real 0m2.714s 00:12:53.116 user 0m0.034s 00:12:53.116 sys 0m0.079s 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:53.116 ************************************ 00:12:53.116 END TEST filesystem_in_capsule_xfs 00:12:53.116 ************************************ 00:12:53.116 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.375 10:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3811327 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3811327 ']' 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3811327 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.375 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3811327 00:12:53.375 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.375 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.375 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3811327' 00:12:53.375 killing process with pid 3811327 00:12:53.376 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3811327 00:12:53.376 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3811327 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:53.944 00:12:53.944 real 0m13.386s 00:12:53.944 user 0m52.262s 
00:12:53.944 sys 0m1.843s 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.944 ************************************ 00:12:53.944 END TEST nvmf_filesystem_in_capsule 00:12:53.944 ************************************ 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:53.944 rmmod nvme_tcp 00:12:53.944 rmmod nvme_fabrics 00:12:53.944 rmmod nvme_keyring 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:53.944 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.945 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:56.484 00:12:56.484 real 0m35.880s 00:12:56.484 user 1m45.819s 00:12:56.484 sys 0m8.897s 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.484 ************************************ 00:12:56.484 END TEST nvmf_filesystem 00:12:56.484 ************************************ 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.484 ************************************ 00:12:56.484 START TEST nvmf_target_discovery 00:12:56.484 ************************************ 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:56.484 * Looking for test storage... 00:12:56.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.484 10:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.484 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:56.485 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.050 10:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:03.050 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:03.050 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.050 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:03.051 Found net devices under 0000:af:00.0: cvl_0_0 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.051 10:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:03.051 Found net devices under 0000:af:00.1: cvl_0_1 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.051 10:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:03.051 00:13:03.051 --- 10.0.0.2 ping statistics --- 00:13:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.051 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:13:03.051 00:13:03.051 --- 10.0.0.1 ping statistics --- 00:13:03.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.051 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3817330 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3817330 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3817330 ']' 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.051 10:28:06 
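For reference, the network plumbing traced above boils down to the short sequence below. Every command is copied from the nvmf/common.sh trace in this log; nothing is added, and the interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addressing are specific to the E810 ports on this host.

# Put the target-side port in its own namespace so initiator and target
# traffic cross a real link rather than loopback.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the default namespace, target gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP port 4420 on the initiator-side interface and verify
# reachability in both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1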
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.051 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 [2024-07-25 10:28:06.661736] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:13:03.051 [2024-07-25 10:28:06.661786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.051 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.051 [2024-07-25 10:28:06.735150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.310 [2024-07-25 10:28:06.809916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.310 [2024-07-25 10:28:06.809952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.310 [2024-07-25 10:28:06.809961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.310 [2024-07-25 10:28:06.809970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.310 [2024-07-25 10:28:06.809976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.310 [2024-07-25 10:28:06.810027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.310 [2024-07-25 10:28:06.810125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.310 [2024-07-25 10:28:06.810208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.310 [2024-07-25 10:28:06.810209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.876 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.876 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:03.876 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.877 [2024-07-25 10:28:07.526930] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
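The target is then started inside that namespace. Below is a minimal sketch of what nvmfappstart amounts to here; the binary path and flags are the ones shown in the trace, while the $! capture and the socket poll are assumptions standing in for the harness's waitforlisten helper.

# Run the SPDK NVMe-oF target in the target namespace: shm id 0 (-i),
# tracepoint group mask 0xFFFF (-e), core mask 0xF (-m, four reactors).
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified stand-in for waitforlisten: block until the app has created
# its UNIX domain RPC socket, after which rpc_cmd calls can be issued.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done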
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.877 Null1 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.877 [2024-07-25 10:28:07.575213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.877 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.135 Null2 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.135 10:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.135 Null3 00:13:04.135 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 Null4 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:13:04.136 00:13:04.136 Discovery Log Number of Records 6, Generation counter 6 00:13:04.136 =====Discovery Log Entry 0====== 00:13:04.136 trtype: tcp 00:13:04.136 adrfam: ipv4 00:13:04.136 subtype: current discovery subsystem 00:13:04.136 treq: not required 00:13:04.136 portid: 0 00:13:04.136 trsvcid: 4420 00:13:04.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:04.136 traddr: 10.0.0.2 00:13:04.136 eflags: explicit discovery connections, duplicate discovery information 00:13:04.136 sectype: none 00:13:04.136 =====Discovery Log Entry 1====== 00:13:04.136 trtype: tcp 00:13:04.136 adrfam: ipv4 00:13:04.136 subtype: nvme subsystem 00:13:04.136 treq: not required 00:13:04.136 portid: 0 00:13:04.136 trsvcid: 4420 00:13:04.136 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:04.136 traddr: 10.0.0.2 00:13:04.136 eflags: none 00:13:04.136 sectype: none 00:13:04.136 =====Discovery Log Entry 2====== 00:13:04.136 trtype: tcp 00:13:04.136 adrfam: ipv4 00:13:04.136 subtype: nvme subsystem 00:13:04.136 treq: not required 00:13:04.136 portid: 0 00:13:04.136 trsvcid: 4420 00:13:04.136 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:04.136 traddr: 10.0.0.2 00:13:04.136 eflags: none 00:13:04.136 sectype: none 00:13:04.136 =====Discovery Log Entry 3====== 00:13:04.136 trtype: tcp 00:13:04.136 adrfam: ipv4 00:13:04.136 subtype: nvme subsystem 00:13:04.136 treq: not required 00:13:04.136 portid: 0 00:13:04.136 trsvcid: 4420 00:13:04.136 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:04.136 traddr: 10.0.0.2 00:13:04.136 eflags: none 00:13:04.136 sectype: none 00:13:04.136 =====Discovery Log Entry 4====== 00:13:04.136 trtype: tcp 00:13:04.136 adrfam: ipv4 00:13:04.136 subtype: nvme subsystem 00:13:04.136 treq: not required 00:13:04.136 portid: 0 00:13:04.136 trsvcid: 4420 00:13:04.136 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:04.136 traddr: 10.0.0.2 00:13:04.136 eflags: none 00:13:04.136 sectype: none 00:13:04.136 =====Discovery Log Entry 5====== 00:13:04.136 trtype: tcp 00:13:04.136 adrfam: ipv4 00:13:04.136 subtype: discovery subsystem referral 00:13:04.136 treq: not required 00:13:04.136 portid: 0 00:13:04.136 trsvcid: 4430 00:13:04.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:04.136 traddr: 10.0.0.2 00:13:04.136 eflags: none 00:13:04.136 sectype: none 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:04.136 Perform nvmf subsystem discovery via RPC 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.136 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 [ 00:13:04.136 { 00:13:04.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.136 "subtype": "Discovery", 00:13:04.136 "listen_addresses": [ 00:13:04.136 { 00:13:04.136 "trtype": "TCP", 00:13:04.136 "adrfam": "IPv4", 00:13:04.136 "traddr": "10.0.0.2", 00:13:04.136 "trsvcid": "4420" 00:13:04.136 } 00:13:04.136 ], 00:13:04.136 "allow_any_host": true, 00:13:04.136 "hosts": [] 00:13:04.136 }, 00:13:04.136 { 00:13:04.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.136 "subtype": "NVMe", 00:13:04.136 "listen_addresses": [ 00:13:04.136 { 00:13:04.136 "trtype": "TCP", 00:13:04.136 "adrfam": "IPv4", 00:13:04.136 
"traddr": "10.0.0.2", 00:13:04.136 "trsvcid": "4420" 00:13:04.136 } 00:13:04.136 ], 00:13:04.136 "allow_any_host": true, 00:13:04.136 "hosts": [], 00:13:04.136 "serial_number": "SPDK00000000000001", 00:13:04.136 "model_number": "SPDK bdev Controller", 00:13:04.136 "max_namespaces": 32, 00:13:04.136 "min_cntlid": 1, 00:13:04.136 "max_cntlid": 65519, 00:13:04.136 "namespaces": [ 00:13:04.136 { 00:13:04.136 "nsid": 1, 00:13:04.136 "bdev_name": "Null1", 00:13:04.136 "name": "Null1", 00:13:04.396 "nguid": "24A3CBD71935467FA27399910EC09ED8", 00:13:04.396 "uuid": "24a3cbd7-1935-467f-a273-99910ec09ed8" 00:13:04.396 } 00:13:04.396 ] 00:13:04.396 }, 00:13:04.396 { 00:13:04.396 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:04.396 "subtype": "NVMe", 00:13:04.396 "listen_addresses": [ 00:13:04.396 { 00:13:04.396 "trtype": "TCP", 00:13:04.396 "adrfam": "IPv4", 00:13:04.396 "traddr": "10.0.0.2", 00:13:04.396 "trsvcid": "4420" 00:13:04.396 } 00:13:04.396 ], 00:13:04.396 "allow_any_host": true, 00:13:04.396 "hosts": [], 00:13:04.396 "serial_number": "SPDK00000000000002", 00:13:04.396 "model_number": "SPDK bdev Controller", 00:13:04.396 "max_namespaces": 32, 00:13:04.396 "min_cntlid": 1, 00:13:04.396 "max_cntlid": 65519, 00:13:04.396 "namespaces": [ 00:13:04.396 { 00:13:04.396 "nsid": 1, 00:13:04.396 "bdev_name": "Null2", 00:13:04.396 "name": "Null2", 00:13:04.396 "nguid": "3B3AC585AB044E51ABC8BE200AEA88EE", 00:13:04.396 "uuid": "3b3ac585-ab04-4e51-abc8-be200aea88ee" 00:13:04.396 } 00:13:04.396 ] 00:13:04.396 }, 00:13:04.396 { 00:13:04.396 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:04.396 "subtype": "NVMe", 00:13:04.396 "listen_addresses": [ 00:13:04.396 { 00:13:04.396 "trtype": "TCP", 00:13:04.396 "adrfam": "IPv4", 00:13:04.396 "traddr": "10.0.0.2", 00:13:04.396 "trsvcid": "4420" 00:13:04.396 } 00:13:04.396 ], 00:13:04.396 "allow_any_host": true, 00:13:04.396 "hosts": [], 00:13:04.396 "serial_number": "SPDK00000000000003", 00:13:04.396 "model_number": "SPDK bdev Controller", 00:13:04.396 "max_namespaces": 32, 00:13:04.396 "min_cntlid": 1, 00:13:04.396 "max_cntlid": 65519, 00:13:04.396 "namespaces": [ 00:13:04.396 { 00:13:04.396 "nsid": 1, 00:13:04.396 "bdev_name": "Null3", 00:13:04.396 "name": "Null3", 00:13:04.396 "nguid": "D68E915EEF2E4F2B9E8CE22B5937CC4B", 00:13:04.396 "uuid": "d68e915e-ef2e-4f2b-9e8c-e22b5937cc4b" 00:13:04.396 } 00:13:04.396 ] 00:13:04.396 }, 00:13:04.396 { 00:13:04.396 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:04.396 "subtype": "NVMe", 00:13:04.396 "listen_addresses": [ 00:13:04.396 { 00:13:04.396 "trtype": "TCP", 00:13:04.396 "adrfam": "IPv4", 00:13:04.396 "traddr": "10.0.0.2", 00:13:04.396 "trsvcid": "4420" 00:13:04.396 } 00:13:04.396 ], 00:13:04.396 "allow_any_host": true, 00:13:04.396 "hosts": [], 00:13:04.396 "serial_number": "SPDK00000000000004", 00:13:04.396 "model_number": "SPDK bdev Controller", 00:13:04.396 "max_namespaces": 32, 00:13:04.396 "min_cntlid": 1, 00:13:04.396 "max_cntlid": 65519, 00:13:04.396 "namespaces": [ 00:13:04.396 { 00:13:04.396 "nsid": 1, 00:13:04.396 "bdev_name": "Null4", 00:13:04.396 "name": "Null4", 00:13:04.396 "nguid": "07C15FB8E3D7416583F64B90081F9EFA", 00:13:04.396 "uuid": "07c15fb8-e3d7-4165-83f6-4b90081f9efa" 00:13:04.396 } 00:13:04.396 ] 00:13:04.396 } 00:13:04.396 ] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:04.396 10:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.396 10:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.396 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.396 rmmod nvme_tcp 00:13:04.396 rmmod nvme_fabrics 00:13:04.396 rmmod nvme_keyring 00:13:04.396 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.396 10:28:08 
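Teardown mirrors the setup; condensed from the trace above, with the final empty-bdev assertion written as a plain test rather than the script's if-block (an equivalent condensation, not the literal source).

for i in $(seq 1 4); do
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
  rpc_cmd bdev_null_delete Null$i
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

# The trace shows an empty name list here, i.e. every Null bdev was removed.
check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
[ -z "$check_bdevs" ]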
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3817330 ']' 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3817330 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3817330 ']' 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3817330 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.397 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3817330 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3817330' 00:13:04.656 killing process with pid 3817330 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3817330 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3817330 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.656 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.192 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.192 00:13:07.192 real 0m10.721s 00:13:07.192 user 0m7.947s 00:13:07.192 sys 0m5.607s 00:13:07.192 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.192 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:07.192 ************************************ 00:13:07.192 END TEST nvmf_target_discovery 00:13:07.192 ************************************ 00:13:07.192 10:28:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:07.192 10:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.193 ************************************ 00:13:07.193 START TEST nvmf_referrals 00:13:07.193 ************************************ 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:07.193 * Looking for test storage... 00:13:07.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.193 10:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.193 10:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.193 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:13.759 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.759 10:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:13.759 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.759 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:13.759 Found net devices under 0000:af:00.0: cvl_0_0 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 
00:13:13.760 Found net devices under 0000:af:00.1: cvl_0_1 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:13:13.760 00:13:13.760 --- 10.0.0.2 ping statistics --- 00:13:13.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.760 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:13:13.760 00:13:13.760 --- 10.0.0.1 ping statistics --- 00:13:13.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.760 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3821291 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3821291 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3821291 ']' 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
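The referrals test re-runs the same device detection, namespace bring-up and target start just traced for the discovery test. What is new are the referral endpoints it will register, defined at the top of referrals.sh exactly as captured earlier in this log:

NVMF_REFERRAL_IP_1=127.0.0.2
NVMF_REFERRAL_IP_2=127.0.0.3
NVMF_REFERRAL_IP_3=127.0.0.4
NVMF_PORT_REFERRAL=4430
DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
NQN=nqn.2016-06.io.spdk:cnode1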
00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.760 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.760 [2024-07-25 10:28:17.047211] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:13:13.760 [2024-07-25 10:28:17.047260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.760 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.760 [2024-07-25 10:28:17.119251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.760 [2024-07-25 10:28:17.193399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.760 [2024-07-25 10:28:17.193438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.760 [2024-07-25 10:28:17.193447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.760 [2024-07-25 10:28:17.193456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.760 [2024-07-25 10:28:17.193462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.760 [2024-07-25 10:28:17.193515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.760 [2024-07-25 10:28:17.193612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.760 [2024-07-25 10:28:17.193672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.760 [2024-07-25 10:28:17.193674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 [2024-07-25 10:28:17.915152] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.327 10:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 [2024-07-25 10:28:17.931333] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:14.327 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.586 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.845 10:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
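The rpc_cmd calls above are driving SPDK's JSON-RPC interface over the default /var/tmp/spdk.sock. A small standalone sketch of the same referral operations, issued directly with scripts/rpc.py; the rpc.py path is the one used by this workspace and is an assumption for any other checkout.

#!/usr/bin/env bash
# Sketch of the referral RPCs exercised by target/referrals.sh, issued directly
# with SPDK's JSON-RPC client (default socket /var/tmp/spdk.sock).
set -euo pipefail
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Point a referral at another discovery service...
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
# ...or at a specific subsystem NQN, as done at referrals.sh@62.
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

# List referrals and pull just the transport addresses (the get_referral_ips rpc path).
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Remove them again.
$RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

referrals.sh then asserts that this RPC view and the initiator-side view obtained through nvme discover (next sketch) report the same sorted address list, which is what the [[ ... == ... ]] comparisons in the trace are doing.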
00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:14.845 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.103 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:15.362 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:15.362 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:15.362 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:15.362 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:15.362 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.362 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.362 10:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:15.362 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.620 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
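Every nvme-side check in this test (get_referral_ips nvme, get_discovery_entries) is a single "nvme discover ... -o json" against the discovery listener on 10.0.0.2:8009, differing only in the jq filter applied to .records[]. Pulled out of the trace into a standalone sketch; the host NQN/ID are the values this node generated in nvmf/common.sh.

#!/usr/bin/env bash
# The nvme-side half of the checks above: one discovery call, different jq filters.
# 10.0.0.2:8009 is the discovery listener created at referrals.sh@41.
set -euo pipefail
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
HOSTID=006f0d1b-21c0-e711-906e-00163566263e

discover_json() {
    nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json
}

# get_referral_ips nvme: every record except the current discovery subsystem itself.
discover_json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# get_discovery_entries: full records of a given subtype, used to check whether the
# referral advertises nqn.2016-06.io.spdk:cnode1 or the plain discovery subsystem NQN.
discover_json | jq '.records[] | select(.subtype == "nvme subsystem")'
discover_json | jq '.records[] | select(.subtype == "discovery subsystem referral")'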
00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:15.879 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:16.137 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:16.137 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:16.137 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
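Everything from here on is teardown. A rough sketch of what nvmftestfini/nvmfcleanup amount to, based on the trace that follows; the pid and names are the ones from this run, and the namespace removal is an assumption, since _remove_spdk_ns itself is not expanded in the log.

#!/usr/bin/env bash
# Rough shape of the nvmftestfini teardown traced below.
nvmfpid=3821291            # pid recorded when nvmf_tgt was launched
NS=cvl_0_0_ns_spdk

sync
modprobe -v -r nvme-tcp || true        # common.sh retries this under set +e
modprobe -v -r nvme-fabrics || true

# killprocess(): the real helper also verifies the process name is reactor_0 (not sudo).
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true
fi

ip netns delete "$NS" 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1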
00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.138 rmmod nvme_tcp 00:13:16.138 rmmod nvme_fabrics 00:13:16.138 rmmod nvme_keyring 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3821291 ']' 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3821291 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3821291 ']' 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3821291 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3821291 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3821291' 00:13:16.138 killing process with pid 3821291 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3821291 00:13:16.138 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3821291 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.397 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.932 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.932 00:13:18.932 real 0m11.617s 00:13:18.932 user 0m13.601s 00:13:18.932 sys 0m5.683s 00:13:18.932 10:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.932 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:18.932 ************************************ 00:13:18.932 END TEST nvmf_referrals 00:13:18.932 ************************************ 00:13:18.932 10:28:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:18.932 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:18.932 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.933 ************************************ 00:13:18.933 START TEST nvmf_connect_disconnect 00:13:18.933 ************************************ 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:18.933 * Looking for test storage... 00:13:18.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.933 10:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.933 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.495 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:25.496 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:25.496 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.496 10:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:25.496 Found net devices under 0000:af:00.0: cvl_0_0 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:25.496 Found net devices under 0000:af:00.1: cvl_0_1 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:13:25.496 00:13:25.496 --- 10.0.0.2 ping statistics --- 00:13:25.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.496 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:13:25.496 00:13:25.496 --- 10.0.0.1 ping statistics --- 00:13:25.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.496 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.496 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3825464 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3825464 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3825464 ']' 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.497 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.497 [2024-07-25 10:28:28.976466] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
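Once this second target is up, connect_disconnect.sh provisions it over RPC and then loops nvme connect/disconnect five times. The trace below shows the RPC half and the five "disconnected 1 controller(s)" messages, but the loop itself runs after "set +x", so the loop body in this sketch is an approximation based on NVME_CONNECT and NVME_HOST as defined in nvmf/common.sh; the RPC calls match the trace.

#!/usr/bin/env bash
# Provisioning plus exercise loop for the connect_disconnect test.
set -euo pipefail
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed checkout path
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
HOSTID=006f0d1b-21c0-e711-906e-00163566263e

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512                        # 64 MiB bdev, 512-byte blocks -> Malloc0
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 1 5); do                               # num_iterations=5
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    # The real script also waits for the controller/namespace to appear before tearing
    # it down; that check is elided here. The disconnect produces the
    # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines in the log.
    nvme disconnect -n "$NQN"
done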
00:13:25.497 [2024-07-25 10:28:28.976513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.497 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.497 [2024-07-25 10:28:29.050696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.497 [2024-07-25 10:28:29.122700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.497 [2024-07-25 10:28:29.122744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.497 [2024-07-25 10:28:29.122754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.497 [2024-07-25 10:28:29.122763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.497 [2024-07-25 10:28:29.122771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.497 [2024-07-25 10:28:29.122824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.497 [2024-07-25 10:28:29.122933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.497 [2024-07-25 10:28:29.123018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.497 [2024-07-25 10:28:29.123020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.430 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:26.430 [2024-07-25 10:28:29.833077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.431 10:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:26.431 [2024-07-25 10:28:29.887553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:26.431 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:29.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.889 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:43.889 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:43.889 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.889 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:43.889 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.890 10:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.890 rmmod nvme_tcp 00:13:43.890 rmmod nvme_fabrics 00:13:43.890 rmmod nvme_keyring 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3825464 ']' 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3825464 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3825464 ']' 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3825464 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3825464 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3825464' 00:13:43.890 killing process with pid 3825464 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3825464 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3825464 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.890 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.792 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.792 00:13:45.792 real 0m27.241s 00:13:45.792 user 1m13.507s 00:13:45.792 sys 0m6.971s 00:13:45.792 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.792 10:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:45.792 ************************************ 00:13:45.792 END TEST nvmf_connect_disconnect 00:13:45.792 ************************************ 00:13:45.792 10:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:45.792 10:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:45.792 10:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.792 10:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.792 ************************************ 00:13:45.792 START TEST nvmf_multitarget 00:13:45.792 ************************************ 00:13:45.792 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:46.051 * Looking for test storage... 00:13:46.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.051 10:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.051 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.052 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:52.655 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:52.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.656 10:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:52.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:52.656 Found net devices under 0000:af:00.0: cvl_0_0 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:52.656 Found net devices under 0000:af:00.1: cvl_0_1 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.656 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:52.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:13:52.656 00:13:52.656 --- 10.0.0.2 ping statistics --- 00:13:52.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.656 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:52.656 00:13:52.656 --- 10.0.0.1 ping statistics --- 00:13:52.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.656 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3832270 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3832270 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3832270 ']' 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.656 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.657 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
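The two pings above are the final step of nvmf_tcp_init, which moves one port of the NIC pair into a private namespace for the target, keeps the other port in the default namespace for the initiator, and opens TCP port 4420 between them. Condensed, the commands as they appear in this log (the cvl_0_0/cvl_0_1 interface names are specific to this machine's E810 ports):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator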
00:13:52.657 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.657 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.657 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.657 [2024-07-25 10:28:56.146232] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:13:52.657 [2024-07-25 10:28:56.146283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.657 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.657 [2024-07-25 10:28:56.218611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.657 [2024-07-25 10:28:56.291920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.657 [2024-07-25 10:28:56.291962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.657 [2024-07-25 10:28:56.291972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.657 [2024-07-25 10:28:56.291980] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.657 [2024-07-25 10:28:56.291987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.657 [2024-07-25 10:28:56.292079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.657 [2024-07-25 10:28:56.292174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.657 [2024-07-25 10:28:56.292258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.657 [2024-07-25 10:28:56.292260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.592 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.592 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:53.592 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.592 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.592 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:53.592 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.592 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:53.592 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:53.592 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:53.592 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:53.592 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:53.592 "nvmf_tgt_1" 00:13:53.592 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:53.851 "nvmf_tgt_2" 00:13:53.851 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:53.851 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:53.851 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:53.851 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:53.851 true 00:13:53.851 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:54.109 true 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.109 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.109 rmmod nvme_tcp 00:13:54.109 rmmod nvme_fabrics 00:13:54.110 rmmod nvme_keyring 00:13:54.110 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.110 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:54.110 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:54.110 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3832270 ']' 00:13:54.110 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3832270 00:13:54.110 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3832270 ']' 00:13:54.110 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3832270 00:13:54.369 10:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:54.369 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.369 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3832270 00:13:54.369 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:54.369 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:54.369 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3832270' 00:13:54.369 killing process with pid 3832270 00:13:54.369 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3832270 00:13:54.369 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3832270 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.369 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:56.901 00:13:56.901 real 0m10.672s 00:13:56.901 user 0m9.458s 00:13:56.901 sys 0m5.495s 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.901 ************************************ 00:13:56.901 END TEST nvmf_multitarget 00:13:56.901 ************************************ 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.901 ************************************ 00:13:56.901 START TEST nvmf_rpc 00:13:56.901 ************************************ 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:56.901 * Looking for test storage... 
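The nvmf_multitarget run that finishes above exercises the multi-target RPCs through multitarget_rpc.py. Stripped of the xtrace noise, the calls visible in this log reduce to the sequence below; the jq length checks correspond to the '[' 1 '!=' 1 ']' / '[' 3 '!=' 3 ']' tests, and the -n/-s arguments are reproduced as the test passed them:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length          # expect 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length          # expect 3: default target plus the two created above
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length          # back to 1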
00:13:56.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.901 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.902 10:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.902 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.467 10:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:03.467 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:03.467 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.467 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.468 
10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:03.468 Found net devices under 0000:af:00.0: cvl_0_0 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:03.468 Found net devices under 0000:af:00.1: cvl_0_1 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.468 10:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.468 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:14:03.468 00:14:03.468 --- 10.0.0.2 ping statistics --- 00:14:03.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.468 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:03.468 00:14:03.468 --- 10.0.0.1 ping statistics --- 00:14:03.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.468 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.468 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3836248 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.727 10:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3836248 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3836248 ']' 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.727 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.727 [2024-07-25 10:29:07.259777] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:14:03.727 [2024-07-25 10:29:07.259819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.727 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.727 [2024-07-25 10:29:07.333734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.727 [2024-07-25 10:29:07.404210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.727 [2024-07-25 10:29:07.404249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.727 [2024-07-25 10:29:07.404259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.727 [2024-07-25 10:29:07.404268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.727 [2024-07-25 10:29:07.404275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
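The nvmf_tcp_init block traced above reduces to a short network-namespace recipe. A hedged sketch of the same steps, using the interface names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addresses this particular run detected, plus the target launch that follows it:

  # Target-side port moves into its own namespace; the initiator side stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # NVMF_INITIATOR_IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # NVMF_FIRST_TARGET_IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # default NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
  modprobe nvme-tcp                                                    # kernel NVMe/TCP initiator
  # nvmf_tgt runs inside the namespace with the core mask and trace flags shown above
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

waitforlisten then polls the RPC socket /var/tmp/spdk.sock until the application answers; build/bin/nvmf_tgt is the binary path under the SPDK checkout shown in the trace.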
00:14:03.727 [2024-07-25 10:29:07.404330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.727 [2024-07-25 10:29:07.404424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.727 [2024-07-25 10:29:07.404508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.727 [2024-07-25 10:29:07.404510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.661 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:04.661 "tick_rate": 2500000000, 00:14:04.661 "poll_groups": [ 00:14:04.661 { 00:14:04.661 "name": "nvmf_tgt_poll_group_000", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [] 00:14:04.662 }, 00:14:04.662 { 00:14:04.662 "name": "nvmf_tgt_poll_group_001", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [] 00:14:04.662 }, 00:14:04.662 { 00:14:04.662 "name": "nvmf_tgt_poll_group_002", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [] 00:14:04.662 }, 00:14:04.662 { 00:14:04.662 "name": "nvmf_tgt_poll_group_003", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [] 00:14:04.662 } 00:14:04.662 ] 00:14:04.662 }' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
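Before nvmf_create_transport runs, the nvmf_get_stats dump above reports one poll group per core in the 0xF mask (nvmf_tgt_poll_group_000 through _003), each with an empty "transports" list and all qpair counters at zero. The jcount helper at target/rpc.sh@28 is simply jq piped into wc -l; run by hand against the same RPC it would look roughly like the following, where scripts/rpc.py is assumed to be the client behind the rpc_cmd wrapper:

  # Count the poll groups the target reports; expect 4 for core mask 0xF.
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l

The jsum helper used just below performs the matching reduction with awk '{s+=$1}END{print s}' to assert that the admin and I/O qpair totals are still zero.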
00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.662 [2024-07-25 10:29:08.239397] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:04.662 "tick_rate": 2500000000, 00:14:04.662 "poll_groups": [ 00:14:04.662 { 00:14:04.662 "name": "nvmf_tgt_poll_group_000", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [ 00:14:04.662 { 00:14:04.662 "trtype": "TCP" 00:14:04.662 } 00:14:04.662 ] 00:14:04.662 }, 00:14:04.662 { 00:14:04.662 "name": "nvmf_tgt_poll_group_001", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [ 00:14:04.662 { 00:14:04.662 "trtype": "TCP" 00:14:04.662 } 00:14:04.662 ] 00:14:04.662 }, 00:14:04.662 { 00:14:04.662 "name": "nvmf_tgt_poll_group_002", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [ 00:14:04.662 { 00:14:04.662 "trtype": "TCP" 00:14:04.662 } 00:14:04.662 ] 00:14:04.662 }, 00:14:04.662 { 00:14:04.662 "name": "nvmf_tgt_poll_group_003", 00:14:04.662 "admin_qpairs": 0, 00:14:04.662 "io_qpairs": 0, 00:14:04.662 "current_admin_qpairs": 0, 00:14:04.662 "current_io_qpairs": 0, 00:14:04.662 "pending_bdev_io": 0, 00:14:04.662 "completed_nvme_io": 0, 00:14:04.662 "transports": [ 00:14:04.662 { 00:14:04.662 "trtype": "TCP" 00:14:04.662 } 00:14:04.662 ] 00:14:04.662 } 00:14:04.662 ] 00:14:04.662 }' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:04.662 10:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:04.662 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.921 Malloc1 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.921 [2024-07-25 10:29:08.422595] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:14:04.921 [2024-07-25 10:29:08.457212] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:14:04.921 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:04.921 could not add new controller: failed to write to nvme-fabrics device 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.921 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.922 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.922 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:04.922 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.922 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.922 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.922 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.297 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.297 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.297 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.297 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:06.297 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:08.200 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:08.200 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:08.200 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.200 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:08.200 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.200 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:08.200 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:08.459 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.459 [2024-07-25 10:29:11.984394] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:14:08.459 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:08.459 could not add new controller: failed to write to nvme-fabrics device 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.459 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.834 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.834 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.834 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.834 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:09.834 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
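The two failed connects above (target/rpc.sh@58 and @69, both wrapped in the NOT helper so a non-zero exit is the expected result) exercise the per-subsystem host ACL: with allow_any_host disabled, any host NQN that was never added, or has been removed with nvmf_subsystem_remove_host, is rejected by nvmf_qpair_access_allowed and the kernel initiator surfaces that as an I/O error on /dev/nvme-fabrics. The policy knobs involved, written as plain RPC calls (scripts/rpc.py assumed as the client behind rpc_cmd):

  # Reject hosts that are not on the subsystem's host list.
  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  # Allow one specific host NQN...
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  # ...or remove it again and open the subsystem to any host instead.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1

With -e set, the connect issued at target/rpc.sh@73 succeeds even though the host NQN is no longer on the subsystem's host list.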
00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:11.738 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.996 [2024-07-25 10:29:15.502687] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.996 
10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.996 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.373 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.373 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:13.373 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.373 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:13.373 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.273 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.273 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:15.273 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.273 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:15.273 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.273 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:15.273 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
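The block starting at target/rpc.sh@81 is a five-iteration loop that rebuilds the subsystem from scratch on every pass. One pass, condensed from the trace (scripts/rpc.py assumed as the RPC client; the host NQN and host ID are the machine UUID values used throughout this run):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # namespace ID 5
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
      --hostid=006f0d1b-21c0-e711-906e-00163566263e \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: poll lsblk -l -o NAME,SERIAL until a device with serial SPDKISFASTANDAWESOME appears
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # waitforserial_disconnect: poll until that serial disappears, then tear the subsystem down
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1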
00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.532 [2024-07-25 10:29:19.081122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.532 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.910 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.910 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:14:16.910 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.910 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:16.910 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:18.814 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.073 [2024-07-25 10:29:22.572317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:19.073 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.074 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.074 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.074 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.449 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.449 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.449 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.449 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:20.449 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:22.352 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:22.352 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:22.352 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.352 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:22.352 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.352 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:22.352 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.352 10:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.352 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:22.352 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:22.352 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.352 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 [2024-07-25 10:29:26.105592] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.611 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.986 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:23.986 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:23.986 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.986 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:23.986 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.882 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 [2024-07-25 10:29:29.602682] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.140 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.566 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.566 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.566 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.566 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:27.566 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:29.489 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:29.489 10:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:29.489 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.489 [2024-07-25 10:29:33.157581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.489 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.747 [2024-07-25 10:29:33.205694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.747 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 [2024-07-25 10:29:33.257869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 [2024-07-25 10:29:33.306026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 [2024-07-25 10:29:33.354185] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.748 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:29.748 "tick_rate": 2500000000, 00:14:29.748 "poll_groups": [ 00:14:29.748 { 00:14:29.748 "name": "nvmf_tgt_poll_group_000", 00:14:29.748 "admin_qpairs": 2, 00:14:29.748 "io_qpairs": 196, 00:14:29.748 "current_admin_qpairs": 0, 00:14:29.748 "current_io_qpairs": 0, 00:14:29.748 "pending_bdev_io": 0, 00:14:29.748 "completed_nvme_io": 296, 00:14:29.748 "transports": [ 00:14:29.748 { 00:14:29.748 "trtype": "TCP" 00:14:29.748 } 00:14:29.748 ] 00:14:29.748 }, 00:14:29.748 { 00:14:29.748 "name": "nvmf_tgt_poll_group_001", 00:14:29.748 "admin_qpairs": 2, 00:14:29.748 "io_qpairs": 196, 00:14:29.748 "current_admin_qpairs": 0, 00:14:29.748 "current_io_qpairs": 0, 00:14:29.748 "pending_bdev_io": 0, 00:14:29.748 "completed_nvme_io": 246, 00:14:29.748 "transports": [ 00:14:29.748 { 00:14:29.748 "trtype": "TCP" 00:14:29.748 } 00:14:29.748 ] 00:14:29.748 }, 00:14:29.748 { 00:14:29.748 "name": "nvmf_tgt_poll_group_002", 00:14:29.748 "admin_qpairs": 1, 00:14:29.748 "io_qpairs": 196, 00:14:29.748 "current_admin_qpairs": 0, 00:14:29.748 "current_io_qpairs": 0, 00:14:29.748 "pending_bdev_io": 0, 00:14:29.748 "completed_nvme_io": 296, 00:14:29.748 "transports": [ 00:14:29.748 { 00:14:29.748 "trtype": "TCP" 00:14:29.748 } 00:14:29.748 ] 00:14:29.748 }, 00:14:29.748 { 00:14:29.748 "name": "nvmf_tgt_poll_group_003", 00:14:29.748 "admin_qpairs": 2, 00:14:29.748 "io_qpairs": 196, 00:14:29.748 "current_admin_qpairs": 0, 00:14:29.748 "current_io_qpairs": 0, 00:14:29.748 "pending_bdev_io": 0, 00:14:29.749 "completed_nvme_io": 296, 00:14:29.749 "transports": [ 00:14:29.749 { 00:14:29.749 "trtype": "TCP" 00:14:29.749 } 00:14:29.749 ] 00:14:29.749 } 00:14:29.749 ] 00:14:29.749 }' 00:14:29.749 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:29.749 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:29.749 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:29.749 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.008 rmmod nvme_tcp 00:14:30.008 rmmod nvme_fabrics 00:14:30.008 rmmod nvme_keyring 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3836248 ']' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3836248 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3836248 ']' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3836248 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3836248 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3836248' 00:14:30.008 killing process with pid 3836248 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3836248 00:14:30.008 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3836248 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
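The qpair-count checks above ((( 7 > 0 )) and (( 784 > 0 ))) come from a small jsum helper in target/rpc.sh that sums a jq filter over the nvmf_get_stats output. A minimal re-creation follows; the exact plumbing of the captured stats into jq is not visible in the trace, so a here-string is assumed here.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
stats=$($rpc nvmf_get_stats)

# Sum a numeric jq filter across all poll groups.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# With the stats dumped above: admin qpairs 2+2+1+2 = 7, I/O qpairs 4*196 = 784.
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))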
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.266 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.800 00:14:32.800 real 0m35.695s 00:14:32.800 user 1m46.362s 00:14:32.800 sys 0m8.228s 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.800 ************************************ 00:14:32.800 END TEST nvmf_rpc 00:14:32.800 ************************************ 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.800 ************************************ 00:14:32.800 START TEST nvmf_invalid 00:14:32.800 ************************************ 00:14:32.800 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:32.800 * Looking for test storage... 00:14:32.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:32.800 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.800 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.801 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.801 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:39.366 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:39.366 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:39.366 Found net devices under 0000:af:00.0: cvl_0_0 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.366 10:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:39.366 Found net devices under 0000:af:00.1: cvl_0_1 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.366 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:39.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:14:39.367 00:14:39.367 --- 10.0.0.2 ping statistics --- 00:14:39.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.367 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:39.367 00:14:39.367 --- 10.0.0.1 ping statistics --- 00:14:39.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.367 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3844563 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3844563 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3844563 ']' 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.367 10:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.367 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:39.367 [2024-07-25 10:29:42.968900] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:14:39.367 [2024-07-25 10:29:42.968948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.367 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.367 [2024-07-25 10:29:43.040824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.627 [2024-07-25 10:29:43.115777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.627 [2024-07-25 10:29:43.115814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.627 [2024-07-25 10:29:43.115823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.627 [2024-07-25 10:29:43.115831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.627 [2024-07-25 10:29:43.115838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
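For orientation before the failure-injection calls that follow: the malformed-input checks in target/invalid.sh reduce to three nvmf_create_subsystem RPCs that the target must reject. The sketch below is an approximation assembled from this trace (the script itself captures the JSON-RPC error text and pattern-matches it); the rpc.py path, NQNs and expected error strings are the ones visible in the log.

# Negative-path checks against the freshly started target (RPC socket /var/tmp/spdk.sock).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Unknown target name must fail with "Unable to find target foobar".
$rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25770 2>&1 \
    | grep -q 'Unable to find target' || echo 'FAIL: bogus target accepted'

# A serial number containing the non-printable byte 0x1f must be rejected as an invalid SN.
$rpc nvmf_create_subsystem -s "$(printf 'SPDKISFASTANDAWESOME\037')" \
    nqn.2016-06.io.spdk:cnode19368 2>&1 \
    | grep -q 'Invalid SN' || echo 'FAIL: invalid serial accepted'

# A model number containing the same byte must be rejected as an invalid MN.
$rpc nvmf_create_subsystem -d "$(printf 'SPDK_Controller\037')" \
    nqn.2016-06.io.spdk:cnode25409 2>&1 \
    | grep -q 'Invalid MN' || echo 'FAIL: invalid model accepted'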
00:14:39.627 [2024-07-25 10:29:43.115882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.627 [2024-07-25 10:29:43.115977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.627 [2024-07-25 10:29:43.116060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.627 [2024-07-25 10:29:43.116061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:40.191 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25770 00:14:40.448 [2024-07-25 10:29:43.983369] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:40.448 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:40.448 { 00:14:40.448 "nqn": "nqn.2016-06.io.spdk:cnode25770", 00:14:40.448 "tgt_name": "foobar", 00:14:40.448 "method": "nvmf_create_subsystem", 00:14:40.448 "req_id": 1 00:14:40.448 } 00:14:40.448 Got JSON-RPC error response 00:14:40.449 response: 00:14:40.449 { 00:14:40.449 "code": -32603, 00:14:40.449 "message": "Unable to find target foobar" 00:14:40.449 }' 00:14:40.449 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:40.449 { 00:14:40.449 "nqn": "nqn.2016-06.io.spdk:cnode25770", 00:14:40.449 "tgt_name": "foobar", 00:14:40.449 "method": "nvmf_create_subsystem", 00:14:40.449 "req_id": 1 00:14:40.449 } 00:14:40.449 Got JSON-RPC error response 00:14:40.449 response: 00:14:40.449 { 00:14:40.449 "code": -32603, 00:14:40.449 "message": "Unable to find target foobar" 00:14:40.449 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:40.449 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:40.449 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19368 00:14:40.706 [2024-07-25 10:29:44.180099] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19368: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:40.706 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:40.706 { 00:14:40.706 "nqn": "nqn.2016-06.io.spdk:cnode19368", 00:14:40.706 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:40.706 "method": "nvmf_create_subsystem", 00:14:40.706 "req_id": 1 00:14:40.706 } 00:14:40.706 Got JSON-RPC error 
response 00:14:40.706 response: 00:14:40.706 { 00:14:40.706 "code": -32602, 00:14:40.706 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:40.706 }' 00:14:40.706 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:40.706 { 00:14:40.706 "nqn": "nqn.2016-06.io.spdk:cnode19368", 00:14:40.706 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:40.706 "method": "nvmf_create_subsystem", 00:14:40.706 "req_id": 1 00:14:40.706 } 00:14:40.706 Got JSON-RPC error response 00:14:40.706 response: 00:14:40.706 { 00:14:40.706 "code": -32602, 00:14:40.706 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:40.706 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:40.706 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:40.706 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25409 00:14:40.706 [2024-07-25 10:29:44.376695] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25409: invalid model number 'SPDK_Controller' 00:14:40.706 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:40.706 { 00:14:40.706 "nqn": "nqn.2016-06.io.spdk:cnode25409", 00:14:40.706 "model_number": "SPDK_Controller\u001f", 00:14:40.706 "method": "nvmf_create_subsystem", 00:14:40.706 "req_id": 1 00:14:40.706 } 00:14:40.706 Got JSON-RPC error response 00:14:40.706 response: 00:14:40.706 { 00:14:40.706 "code": -32602, 00:14:40.706 "message": "Invalid MN SPDK_Controller\u001f" 00:14:40.706 }' 00:14:40.706 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:40.706 { 00:14:40.706 "nqn": "nqn.2016-06.io.spdk:cnode25409", 00:14:40.706 "model_number": "SPDK_Controller\u001f", 00:14:40.706 "method": "nvmf_create_subsystem", 00:14:40.706 "req_id": 1 00:14:40.706 } 00:14:40.706 Got JSON-RPC error response 00:14:40.706 response: 00:14:40.706 { 00:14:40.706 "code": -32602, 00:14:40.706 "message": "Invalid MN SPDK_Controller\u001f" 00:14:40.706 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 94 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.965 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:40.966 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^:jT#a9+1+,(Vzp is;RR' 00:14:40.966 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '^:jT#a9+1+,(Vzp is;RR' nqn.2016-06.io.spdk:cnode20262 00:14:41.225 [2024-07-25 10:29:44.729851] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20262: invalid serial number '^:jT#a9+1+,(Vzp is;RR' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:41.225 { 00:14:41.225 "nqn": "nqn.2016-06.io.spdk:cnode20262", 00:14:41.225 "serial_number": "^:jT#a9+1+,(Vzp is;RR", 00:14:41.225 "method": "nvmf_create_subsystem", 00:14:41.225 "req_id": 1 00:14:41.225 } 00:14:41.225 Got JSON-RPC error response 00:14:41.225 response: 00:14:41.225 { 00:14:41.225 "code": -32602, 00:14:41.225 "message": "Invalid SN ^:jT#a9+1+,(Vzp is;RR" 00:14:41.225 }' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:41.225 { 00:14:41.225 "nqn": "nqn.2016-06.io.spdk:cnode20262", 00:14:41.225 "serial_number": "^:jT#a9+1+,(Vzp is;RR", 00:14:41.225 "method": "nvmf_create_subsystem", 00:14:41.225 "req_id": 1 00:14:41.225 } 00:14:41.225 Got JSON-RPC error response 00:14:41.225 response: 00:14:41.225 { 00:14:41.225 "code": -32602, 00:14:41.225 "message": "Invalid SN ^:jT#a9+1+,(Vzp is;RR" 00:14:41.225 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:41.225 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:41.225 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:41.226 
10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:41.226 
10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.226 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.226 
10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.484 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 
10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:41.485 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 
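The loop traced above is target/invalid.sh's gen_random_s helper building a random string one character at a time: pick a code from a table of the ASCII values 32-127, render it with printf %x and echo -e, and append it with string+=. A minimal standalone sketch of the same idea follows; the helper name and the character-table range come from the trace, while the simplified 32-126 range and the rest of the body are illustrative (the real script also guards against a leading '-', the @28 check visible above).

gen_random_s() {                                        # emit a random string of $1 printable characters
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        hex=$(printf %x $(( RANDOM % 95 + 32 )))        # random printable ASCII code point, as hex
        string+=$(echo -e "\x$hex")                     # decode \xNN and append, as in the trace
    done
    echo "$string"
}
gen_random_s 41                                         # e.g. a 41-character model-number candidate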
00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'0y|"{xZd^]NunpF(X=Df6q19Ka Wq1nrQuo"DEMbl' 00:14:41.485 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0y|"{xZd^]NunpF(X=Df6q19Ka Wq1nrQuo"DEMbl' nqn.2016-06.io.spdk:cnode3898 00:14:41.742 [2024-07-25 10:29:45.231505] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3898: invalid model number '0y|"{xZd^]NunpF(X=Df6q19Ka Wq1nrQuo"DEMbl' 00:14:41.742 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:41.742 { 00:14:41.742 "nqn": "nqn.2016-06.io.spdk:cnode3898", 00:14:41.742 "model_number": "0y|\"{xZd^]NunpF(X=Df6q19Ka Wq1nrQuo\"DEMbl", 00:14:41.742 "method": "nvmf_create_subsystem", 00:14:41.742 "req_id": 1 00:14:41.742 } 00:14:41.742 Got JSON-RPC error response 00:14:41.742 response: 00:14:41.742 { 00:14:41.742 "code": -32602, 00:14:41.742 "message": "Invalid MN 0y|\"{xZd^]NunpF(X=Df6q19Ka Wq1nrQuo\"DEMbl" 00:14:41.742 }' 00:14:41.742 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:41.742 { 00:14:41.742 "nqn": "nqn.2016-06.io.spdk:cnode3898", 00:14:41.742 "model_number": "0y|\"{xZd^]NunpF(X=Df6q19Ka Wq1nrQuo\"DEMbl", 00:14:41.742 "method": "nvmf_create_subsystem", 00:14:41.742 "req_id": 1 00:14:41.742 } 00:14:41.742 Got JSON-RPC error response 00:14:41.742 response: 00:14:41.742 { 00:14:41.742 "code": -32602, 00:14:41.742 "message": "Invalid MN 0y|\"{xZd^]NunpF(X=Df6q19Ka Wq1nrQuo\"DEMbl" 00:14:41.742 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:41.743 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:41.743 [2024-07-25 10:29:45.420208] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.000 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:42.000 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:42.000 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:42.000 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:42.000 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:42.000 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:42.258 [2024-07-25 10:29:45.817519] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:42.258 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:42.258 { 00:14:42.258 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:42.258 "listen_address": { 00:14:42.258 "trtype": "tcp", 00:14:42.258 "traddr": "", 00:14:42.258 "trsvcid": "4421" 00:14:42.258 }, 00:14:42.258 "method": "nvmf_subsystem_remove_listener", 00:14:42.258 "req_id": 1 00:14:42.258 } 00:14:42.258 Got JSON-RPC error response 00:14:42.258 response: 00:14:42.258 { 00:14:42.258 "code": -32602, 00:14:42.258 "message": "Invalid parameters" 00:14:42.258 }' 00:14:42.258 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 
request: 00:14:42.258 { 00:14:42.258 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:42.258 "listen_address": { 00:14:42.258 "trtype": "tcp", 00:14:42.258 "traddr": "", 00:14:42.258 "trsvcid": "4421" 00:14:42.258 }, 00:14:42.258 "method": "nvmf_subsystem_remove_listener", 00:14:42.258 "req_id": 1 00:14:42.258 } 00:14:42.258 Got JSON-RPC error response 00:14:42.258 response: 00:14:42.258 { 00:14:42.258 "code": -32602, 00:14:42.258 "message": "Invalid parameters" 00:14:42.258 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:42.258 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11768 -i 0 00:14:42.516 [2024-07-25 10:29:46.010122] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11768: invalid cntlid range [0-65519] 00:14:42.516 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:42.516 { 00:14:42.516 "nqn": "nqn.2016-06.io.spdk:cnode11768", 00:14:42.516 "min_cntlid": 0, 00:14:42.516 "method": "nvmf_create_subsystem", 00:14:42.516 "req_id": 1 00:14:42.516 } 00:14:42.516 Got JSON-RPC error response 00:14:42.516 response: 00:14:42.516 { 00:14:42.516 "code": -32602, 00:14:42.516 "message": "Invalid cntlid range [0-65519]" 00:14:42.516 }' 00:14:42.516 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:42.516 { 00:14:42.516 "nqn": "nqn.2016-06.io.spdk:cnode11768", 00:14:42.516 "min_cntlid": 0, 00:14:42.516 "method": "nvmf_create_subsystem", 00:14:42.516 "req_id": 1 00:14:42.516 } 00:14:42.516 Got JSON-RPC error response 00:14:42.516 response: 00:14:42.516 { 00:14:42.516 "code": -32602, 00:14:42.516 "message": "Invalid cntlid range [0-65519]" 00:14:42.516 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:42.516 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31299 -i 65520 00:14:42.516 [2024-07-25 10:29:46.194800] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31299: invalid cntlid range [65520-65519] 00:14:42.774 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:42.774 { 00:14:42.774 "nqn": "nqn.2016-06.io.spdk:cnode31299", 00:14:42.774 "min_cntlid": 65520, 00:14:42.774 "method": "nvmf_create_subsystem", 00:14:42.774 "req_id": 1 00:14:42.774 } 00:14:42.774 Got JSON-RPC error response 00:14:42.774 response: 00:14:42.774 { 00:14:42.774 "code": -32602, 00:14:42.774 "message": "Invalid cntlid range [65520-65519]" 00:14:42.774 }' 00:14:42.774 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:42.774 { 00:14:42.774 "nqn": "nqn.2016-06.io.spdk:cnode31299", 00:14:42.774 "min_cntlid": 65520, 00:14:42.774 "method": "nvmf_create_subsystem", 00:14:42.774 "req_id": 1 00:14:42.774 } 00:14:42.774 Got JSON-RPC error response 00:14:42.774 response: 00:14:42.774 { 00:14:42.774 "code": -32602, 00:14:42.774 "message": "Invalid cntlid range [65520-65519]" 00:14:42.774 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:42.774 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7657 -I 0 00:14:42.774 [2024-07-25 
10:29:46.379360] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7657: invalid cntlid range [1-0] 00:14:42.774 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:42.774 { 00:14:42.774 "nqn": "nqn.2016-06.io.spdk:cnode7657", 00:14:42.774 "max_cntlid": 0, 00:14:42.774 "method": "nvmf_create_subsystem", 00:14:42.774 "req_id": 1 00:14:42.774 } 00:14:42.774 Got JSON-RPC error response 00:14:42.774 response: 00:14:42.774 { 00:14:42.774 "code": -32602, 00:14:42.774 "message": "Invalid cntlid range [1-0]" 00:14:42.774 }' 00:14:42.774 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:42.774 { 00:14:42.774 "nqn": "nqn.2016-06.io.spdk:cnode7657", 00:14:42.774 "max_cntlid": 0, 00:14:42.774 "method": "nvmf_create_subsystem", 00:14:42.774 "req_id": 1 00:14:42.774 } 00:14:42.774 Got JSON-RPC error response 00:14:42.774 response: 00:14:42.774 { 00:14:42.774 "code": -32602, 00:14:42.774 "message": "Invalid cntlid range [1-0]" 00:14:42.774 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:42.774 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30629 -I 65520 00:14:43.031 [2024-07-25 10:29:46.555927] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30629: invalid cntlid range [1-65520] 00:14:43.031 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:43.031 { 00:14:43.031 "nqn": "nqn.2016-06.io.spdk:cnode30629", 00:14:43.031 "max_cntlid": 65520, 00:14:43.031 "method": "nvmf_create_subsystem", 00:14:43.031 "req_id": 1 00:14:43.031 } 00:14:43.031 Got JSON-RPC error response 00:14:43.031 response: 00:14:43.031 { 00:14:43.031 "code": -32602, 00:14:43.031 "message": "Invalid cntlid range [1-65520]" 00:14:43.031 }' 00:14:43.032 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:43.032 { 00:14:43.032 "nqn": "nqn.2016-06.io.spdk:cnode30629", 00:14:43.032 "max_cntlid": 65520, 00:14:43.032 "method": "nvmf_create_subsystem", 00:14:43.032 "req_id": 1 00:14:43.032 } 00:14:43.032 Got JSON-RPC error response 00:14:43.032 response: 00:14:43.032 { 00:14:43.032 "code": -32602, 00:14:43.032 "message": "Invalid cntlid range [1-65520]" 00:14:43.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:43.032 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25063 -i 6 -I 5 00:14:43.032 [2024-07-25 10:29:46.728509] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25063: invalid cntlid range [6-5] 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:43.290 { 00:14:43.290 "nqn": "nqn.2016-06.io.spdk:cnode25063", 00:14:43.290 "min_cntlid": 6, 00:14:43.290 "max_cntlid": 5, 00:14:43.290 "method": "nvmf_create_subsystem", 00:14:43.290 "req_id": 1 00:14:43.290 } 00:14:43.290 Got JSON-RPC error response 00:14:43.290 response: 00:14:43.290 { 00:14:43.290 "code": -32602, 00:14:43.290 "message": "Invalid cntlid range [6-5]" 00:14:43.290 }' 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:43.290 { 00:14:43.290 "nqn": 
"nqn.2016-06.io.spdk:cnode25063", 00:14:43.290 "min_cntlid": 6, 00:14:43.290 "max_cntlid": 5, 00:14:43.290 "method": "nvmf_create_subsystem", 00:14:43.290 "req_id": 1 00:14:43.290 } 00:14:43.290 Got JSON-RPC error response 00:14:43.290 response: 00:14:43.290 { 00:14:43.290 "code": -32602, 00:14:43.290 "message": "Invalid cntlid range [6-5]" 00:14:43.290 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:43.290 { 00:14:43.290 "name": "foobar", 00:14:43.290 "method": "nvmf_delete_target", 00:14:43.290 "req_id": 1 00:14:43.290 } 00:14:43.290 Got JSON-RPC error response 00:14:43.290 response: 00:14:43.290 { 00:14:43.290 "code": -32602, 00:14:43.290 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:43.290 }' 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:43.290 { 00:14:43.290 "name": "foobar", 00:14:43.290 "method": "nvmf_delete_target", 00:14:43.290 "req_id": 1 00:14:43.290 } 00:14:43.290 Got JSON-RPC error response 00:14:43.290 response: 00:14:43.290 { 00:14:43.290 "code": -32602, 00:14:43.290 "message": "The specified target doesn't exist, cannot delete it." 00:14:43.290 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.290 rmmod nvme_tcp 00:14:43.290 rmmod nvme_fabrics 00:14:43.290 rmmod nvme_keyring 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3844563 ']' 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3844563 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3844563 ']' 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3844563 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.290 
10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3844563 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3844563' 00:14:43.290 killing process with pid 3844563 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3844563 00:14:43.290 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3844563 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.549 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.086 00:14:46.086 real 0m13.270s 00:14:46.086 user 0m20.178s 00:14:46.086 sys 0m6.427s 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 ************************************ 00:14:46.086 END TEST nvmf_invalid 00:14:46.086 ************************************ 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 ************************************ 00:14:46.086 START TEST nvmf_connect_stress 00:14:46.086 ************************************ 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:46.086 * Looking for test storage... 
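With nvmf_invalid finished, note that every negative check above follows one pattern: issue a create/delete RPC with a single deliberately bad field and assert that the JSON-RPC error text matches. The calls below are the same ones visible in the trace (rpc.py under the workspace's spdk/scripts/, multitarget_rpc.py under spdk/test/nvmf/target/); each is expected to be rejected with the quoted message rather than create anything.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# serial number / model number containing a control character -> "Invalid SN" / "Invalid MN"
$RPC nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19368
$RPC nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25409
# controller IDs must stay within 1-65519 and min must not exceed max -> "Invalid cntlid range"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11768 -i 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30629 -I 65520
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25063 -i 6 -I 5
# deleting a target that was never created -> "The specified target doesn't exist, cannot delete it."
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar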
00:14:46.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.086 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.087 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:52.657 10:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:52.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:52.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
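The "Found 0000:af:00.0 / 0000:af:00.1 (0x8086 - 0x159b)" lines above come from gather_supported_nvmf_pci_devs in nvmf/common.sh, which fills the e810/x722/mlx ID arrays seen in the trace and then walks the PCI bus for NICs that match. A minimal sketch of that discovery loop, assuming the sysfs layout the trace reads from (an illustration only, not the exact SPDK function, which caches lspci output):

    intel=0x8086
    e810=(0x1592 0x159b)                     # device IDs the trace checks against
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        for id in "${e810[@]}"; do
            [[ $vendor == "$intel" && $device == "$id" ]] || continue
            echo "Found ${pci##*/} ($vendor - $device)"
            for net in "$pci"/net/*; do      # a NIC bound to a kernel driver exposes net/<ifname>
                [[ -e $net ]] && net_devs+=("${net##*/}")
            done
        done
    done
    echo "net_devs: ${net_devs[*]}"          # cvl_0_0 cvl_0_1 in this run

Only interfaces that show up under net/ (i.e. ports bound to the ice driver, not to vfio/uio) make it into net_devs, which is why the script later insists on at least two usable ports before it builds the test topology.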
00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:52.657 Found net devices under 0000:af:00.0: cvl_0_0 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:52.657 Found net devices under 0000:af:00.1: cvl_0_1 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.657 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.657 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.657 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.657 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:52.657 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.657 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.657 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:52.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:14:52.658 00:14:52.658 --- 10.0.0.2 ping statistics --- 00:14:52.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.658 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:52.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:14:52.658 00:14:52.658 --- 10.0.0.1 ping statistics --- 00:14:52.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.658 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3848950 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3848950 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3848950 ']' 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:52.658 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 [2024-07-25 10:29:56.308754] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
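By this point nvmf_tcp_init has moved one E810 port (cvl_0_0) into a private network namespace, addressed both ends, opened TCP port 4420, and confirmed connectivity in both directions, and nvmfappstart has launched nvmf_tgt inside that namespace; the SPDK/DPDK banner above is its startup output. Condensed from the commands visible in the trace (interface names, addresses and flags are taken from the log; waitforlisten is the helper named there, sketched here without its retry logic):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &    # core mask 0xE = the three reactors logged above
    nvmfpid=$!
    waitforlisten "$nvmfpid"                            # blocks until /var/tmp/spdk.sock accepts RPCs

Putting the target port in its own namespace is what lets a single physical host act as both NVMe/TCP initiator and target over a real back-to-back NIC pair.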
00:14:52.658 [2024-07-25 10:29:56.308801] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.658 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.917 [2024-07-25 10:29:56.382231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.917 [2024-07-25 10:29:56.455473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.917 [2024-07-25 10:29:56.455508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.917 [2024-07-25 10:29:56.455518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.917 [2024-07-25 10:29:56.455527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.917 [2024-07-25 10:29:56.455534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.917 [2024-07-25 10:29:56.455638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.917 [2024-07-25 10:29:56.455732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.917 [2024-07-25 10:29:56.455735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.484 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.484 [2024-07-25 10:29:57.170536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.742 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.743 [2024-07-25 10:29:57.209958] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.743 NULL1 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3849228 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.743 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.743 10:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.001 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.001 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:54.001 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.001 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.001 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:54.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.825 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.825 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:54.825 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.825 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.825 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.081 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.081 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:55.081 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.081 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.081 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.339 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.339 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:55.339 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.339 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.339 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.595 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.595 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:55.595 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.595 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.595 10:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.160 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.161 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:56.161 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.161 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.161 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.419 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.419 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:56.419 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.419 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.419 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.676 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.676 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:56.676 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.676 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.676 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.933 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.933 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:56.933 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.933 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.933 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.496 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.496 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:57.496 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.496 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.496 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.753 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.753 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:57.753 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.753 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.753 10:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.011 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.011 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:58.011 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.011 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.011 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.269 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.269 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:58.269 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.269 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.269 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.526 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.526 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:58.526 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.526 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.526 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.092 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.092 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:59.092 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.092 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.092 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.351 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.351 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:59.351 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.351 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.351 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.608 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.608 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:59.608 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.608 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.608 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:14:59.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.865 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.433 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.433 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:00.433 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.433 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.433 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.692 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.692 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:00.692 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.692 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.692 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.949 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.949 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:00.949 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.949 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.949 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.207 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.207 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:01.207 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.207 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.207 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.464 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.464 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:01.464 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.464 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.464 10:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.031 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.031 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:02.031 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.031 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.031 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.289 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.289 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:02.289 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.289 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.289 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.547 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.547 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:02.547 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.547 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.547 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.806 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.806 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:02.806 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.806 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.806 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.063 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.063 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:03.063 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.321 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.321 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.578 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.578 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:03.578 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.578 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.578 10:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.837 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3849228 00:15:03.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3849228) - No such process 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3849228 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.837 rmmod nvme_tcp 00:15:03.837 rmmod nvme_fabrics 00:15:03.837 rmmod nvme_keyring 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3848950 ']' 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3848950 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3848950 ']' 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3848950 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.837 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3848950 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3848950' 00:15:04.095 killing process with pid 3848950 00:15:04.095 10:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3848950 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3848950 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.095 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:06.632 00:15:06.632 real 0m20.482s 00:15:06.632 user 0m40.933s 00:15:06.632 sys 0m9.998s 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.632 ************************************ 00:15:06.632 END TEST nvmf_connect_stress 00:15:06.632 ************************************ 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.632 ************************************ 00:15:06.632 START TEST nvmf_fused_ordering 00:15:06.632 ************************************ 00:15:06.632 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:06.632 * Looking for test storage... 
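The run that just finished (START ... END TEST nvmf_connect_stress) follows the pattern visible in the trace: configure a TCP subsystem over RPC, start the connect_stress stressor against it for ten seconds, and keep feeding the target batches of RPCs for as long as the stressor stays alive. A condensed sketch of that flow with the arguments taken from the log; the way the rpc.txt batch is built and replayed is simplified here, so treat it as an illustration rather than the verbatim connect_stress.sh:

    rpc_py="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks

    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    while kill -0 "$PERF_PID" 2>/dev/null; do           # connect_stress.sh line 34 in the trace
        rpc_cmd < "$rpcs"                               # replay the ~20 queued calls in rpc.txt
    done
    wait "$PERF_PID"   # the "kill: (3849228) - No such process" above just means it had already exited

The repeated "kill -0 3849228 / rpc_cmd" pairs in the log are iterations of that loop; the test passes as long as the target keeps answering RPCs while connections are being torn up and down underneath it.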
00:15:06.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.632 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.633 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.633 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.633 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:06.633 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:06.633 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.633 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:13.196 10:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:13.196 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:13.196 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
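The device-discovery trace above matches PCI vendor/device pairs (Intel 0x8086:0x159b for the E810 ports found at 0000:af:00.0 and 0000:af:00.1) and then looks under each function's net/ directory for the kernel interface name. A minimal standalone sketch of that sysfs mapping follows, assuming only the standard /sys/bus/pci layout; it is an illustration, not the gather_supported_nvmf_pci_devs helper itself.

    # Walk /sys/bus/pci and print the net interfaces backed by Intel E810 (0x159b)
    # functions -- the same mapping that yields "0000:af:00.0 -> cvl_0_0" in the log.
    vendor=0x8086 device=0x159b
    for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
      for net in "$pci"/net/*; do
        [[ -e $net ]] || continue          # skip functions with no bound netdev
        echo "Found ${pci##*/} -> ${net##*/}"
      done
    done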
00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:13.196 Found net devices under 0000:af:00.0: cvl_0_0 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:13.196 Found net devices under 0000:af:00.1: cvl_0_1 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.196 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:13.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:15:13.197 00:15:13.197 --- 10.0.0.2 ping statistics --- 00:15:13.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.197 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:15:13.197 00:15:13.197 --- 10.0.0.1 ping statistics --- 00:15:13.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.197 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3855094 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3855094 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3855094 ']' 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.197 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.455 [2024-07-25 10:30:16.902195] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:15:13.455 [2024-07-25 10:30:16.902247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.455 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.455 [2024-07-25 10:30:16.974719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.455 [2024-07-25 10:30:17.052189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.455 [2024-07-25 10:30:17.052227] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.455 [2024-07-25 10:30:17.052237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.455 [2024-07-25 10:30:17.052245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.455 [2024-07-25 10:30:17.052255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.456 [2024-07-25 10:30:17.052277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.022 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.022 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:14.022 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.022 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:14.022 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.281 [2024-07-25 10:30:17.761445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.281 [2024-07-25 10:30:17.777608] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.281 NULL1 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.281 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:14.281 [2024-07-25 10:30:17.833681] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:15:14.281 [2024-07-25 10:30:17.833724] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855329 ] 00:15:14.281 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.848 Attached to nqn.2016-06.io.spdk:cnode1 00:15:14.848 Namespace ID: 1 size: 1GB 00:15:14.848 fused_ordering(0) 00:15:14.848 fused_ordering(1) 00:15:14.848 fused_ordering(2) 00:15:14.848 fused_ordering(3) 00:15:14.848 fused_ordering(4) 00:15:14.848 fused_ordering(5) 00:15:14.848 fused_ordering(6) 00:15:14.848 fused_ordering(7) 00:15:14.848 fused_ordering(8) 00:15:14.848 fused_ordering(9) 00:15:14.848 fused_ordering(10) 00:15:14.848 fused_ordering(11) 00:15:14.848 fused_ordering(12) 00:15:14.848 fused_ordering(13) 00:15:14.848 fused_ordering(14) 00:15:14.848 fused_ordering(15) 00:15:14.848 fused_ordering(16) 00:15:14.848 fused_ordering(17) 00:15:14.848 fused_ordering(18) 00:15:14.848 fused_ordering(19) 00:15:14.848 fused_ordering(20) 00:15:14.848 fused_ordering(21) 00:15:14.848 fused_ordering(22) 00:15:14.848 fused_ordering(23) 00:15:14.848 fused_ordering(24) 00:15:14.848 fused_ordering(25) 00:15:14.848 fused_ordering(26) 00:15:14.848 fused_ordering(27) 00:15:14.848 fused_ordering(28) 00:15:14.848 fused_ordering(29) 00:15:14.848 fused_ordering(30) 00:15:14.848 fused_ordering(31) 00:15:14.848 fused_ordering(32) 00:15:14.848 fused_ordering(33) 00:15:14.848 fused_ordering(34) 00:15:14.848 fused_ordering(35) 00:15:14.848 fused_ordering(36) 00:15:14.848 fused_ordering(37) 00:15:14.848 fused_ordering(38) 00:15:14.848 fused_ordering(39) 00:15:14.848 fused_ordering(40) 00:15:14.848 fused_ordering(41) 00:15:14.848 fused_ordering(42) 00:15:14.848 fused_ordering(43) 00:15:14.848 fused_ordering(44) 00:15:14.848 fused_ordering(45) 00:15:14.848 fused_ordering(46) 00:15:14.848 fused_ordering(47) 00:15:14.848 fused_ordering(48) 00:15:14.848 fused_ordering(49) 00:15:14.848 fused_ordering(50) 00:15:14.848 fused_ordering(51) 00:15:14.848 fused_ordering(52) 00:15:14.848 fused_ordering(53) 00:15:14.848 fused_ordering(54) 00:15:14.848 fused_ordering(55) 00:15:14.848 fused_ordering(56) 00:15:14.848 fused_ordering(57) 00:15:14.848 fused_ordering(58) 00:15:14.848 fused_ordering(59) 00:15:14.848 fused_ordering(60) 00:15:14.848 fused_ordering(61) 00:15:14.848 fused_ordering(62) 00:15:14.848 fused_ordering(63) 00:15:14.848 fused_ordering(64) 00:15:14.848 fused_ordering(65) 00:15:14.848 fused_ordering(66) 00:15:14.848 fused_ordering(67) 00:15:14.848 fused_ordering(68) 00:15:14.848 fused_ordering(69) 00:15:14.848 fused_ordering(70) 00:15:14.848 fused_ordering(71) 00:15:14.848 fused_ordering(72) 00:15:14.848 fused_ordering(73) 00:15:14.848 fused_ordering(74) 00:15:14.848 fused_ordering(75) 00:15:14.848 fused_ordering(76) 00:15:14.848 fused_ordering(77) 00:15:14.848 fused_ordering(78) 00:15:14.848 fused_ordering(79) 00:15:14.848 fused_ordering(80) 00:15:14.848 fused_ordering(81) 00:15:14.848 fused_ordering(82) 00:15:14.848 fused_ordering(83) 00:15:14.848 fused_ordering(84) 00:15:14.848 fused_ordering(85) 00:15:14.848 fused_ordering(86) 00:15:14.848 fused_ordering(87) 00:15:14.848 fused_ordering(88) 00:15:14.848 fused_ordering(89) 00:15:14.848 fused_ordering(90) 00:15:14.848 fused_ordering(91) 00:15:14.848 fused_ordering(92) 00:15:14.848 fused_ordering(93) 00:15:14.848 fused_ordering(94) 00:15:14.848 fused_ordering(95) 00:15:14.848 fused_ordering(96) 
00:15:14.848 fused_ordering(97) 00:15:14.848 fused_ordering(98) 00:15:14.848 fused_ordering(99) 00:15:14.848 fused_ordering(100) 00:15:14.848 fused_ordering(101) 00:15:14.848 fused_ordering(102) 00:15:14.848 fused_ordering(103) 00:15:14.848 fused_ordering(104) 00:15:14.848 fused_ordering(105) 00:15:14.848 fused_ordering(106) 00:15:14.848 fused_ordering(107) 00:15:14.848 fused_ordering(108) 00:15:14.848 fused_ordering(109) 00:15:14.848 fused_ordering(110) 00:15:14.848 fused_ordering(111) 00:15:14.848 fused_ordering(112) 00:15:14.848 fused_ordering(113) 00:15:14.848 fused_ordering(114) 00:15:14.848 fused_ordering(115) 00:15:14.848 fused_ordering(116) 00:15:14.848 fused_ordering(117) 00:15:14.848 fused_ordering(118) 00:15:14.848 fused_ordering(119) 00:15:14.848 fused_ordering(120) 00:15:14.848 fused_ordering(121) 00:15:14.848 fused_ordering(122) 00:15:14.848 fused_ordering(123) 00:15:14.848 fused_ordering(124) 00:15:14.848 fused_ordering(125) 00:15:14.848 fused_ordering(126) 00:15:14.848 fused_ordering(127) 00:15:14.848 fused_ordering(128) 00:15:14.848 fused_ordering(129) 00:15:14.848 fused_ordering(130) 00:15:14.848 fused_ordering(131) 00:15:14.848 fused_ordering(132) 00:15:14.848 fused_ordering(133) 00:15:14.848 fused_ordering(134) 00:15:14.848 fused_ordering(135) 00:15:14.848 fused_ordering(136) 00:15:14.848 fused_ordering(137) 00:15:14.848 fused_ordering(138) 00:15:14.848 fused_ordering(139) 00:15:14.848 fused_ordering(140) 00:15:14.848 fused_ordering(141) 00:15:14.848 fused_ordering(142) 00:15:14.848 fused_ordering(143) 00:15:14.848 fused_ordering(144) 00:15:14.848 fused_ordering(145) 00:15:14.848 fused_ordering(146) 00:15:14.848 fused_ordering(147) 00:15:14.848 fused_ordering(148) 00:15:14.848 fused_ordering(149) 00:15:14.848 fused_ordering(150) 00:15:14.848 fused_ordering(151) 00:15:14.848 fused_ordering(152) 00:15:14.848 fused_ordering(153) 00:15:14.848 fused_ordering(154) 00:15:14.848 fused_ordering(155) 00:15:14.848 fused_ordering(156) 00:15:14.848 fused_ordering(157) 00:15:14.848 fused_ordering(158) 00:15:14.848 fused_ordering(159) 00:15:14.848 fused_ordering(160) 00:15:14.848 fused_ordering(161) 00:15:14.848 fused_ordering(162) 00:15:14.848 fused_ordering(163) 00:15:14.848 fused_ordering(164) 00:15:14.848 fused_ordering(165) 00:15:14.848 fused_ordering(166) 00:15:14.848 fused_ordering(167) 00:15:14.848 fused_ordering(168) 00:15:14.848 fused_ordering(169) 00:15:14.848 fused_ordering(170) 00:15:14.848 fused_ordering(171) 00:15:14.849 fused_ordering(172) 00:15:14.849 fused_ordering(173) 00:15:14.849 fused_ordering(174) 00:15:14.849 fused_ordering(175) 00:15:14.849 fused_ordering(176) 00:15:14.849 fused_ordering(177) 00:15:14.849 fused_ordering(178) 00:15:14.849 fused_ordering(179) 00:15:14.849 fused_ordering(180) 00:15:14.849 fused_ordering(181) 00:15:14.849 fused_ordering(182) 00:15:14.849 fused_ordering(183) 00:15:14.849 fused_ordering(184) 00:15:14.849 fused_ordering(185) 00:15:14.849 fused_ordering(186) 00:15:14.849 fused_ordering(187) 00:15:14.849 fused_ordering(188) 00:15:14.849 fused_ordering(189) 00:15:14.849 fused_ordering(190) 00:15:14.849 fused_ordering(191) 00:15:14.849 fused_ordering(192) 00:15:14.849 fused_ordering(193) 00:15:14.849 fused_ordering(194) 00:15:14.849 fused_ordering(195) 00:15:14.849 fused_ordering(196) 00:15:14.849 fused_ordering(197) 00:15:14.849 fused_ordering(198) 00:15:14.849 fused_ordering(199) 00:15:14.849 fused_ordering(200) 00:15:14.849 fused_ordering(201) 00:15:14.849 fused_ordering(202) 00:15:14.849 fused_ordering(203) 00:15:14.849 
fused_ordering(204) 00:15:14.849 fused_ordering(205) 00:15:15.108 fused_ordering(206) 00:15:15.108 fused_ordering(207) 00:15:15.108 fused_ordering(208) 00:15:15.108 fused_ordering(209) 00:15:15.108 fused_ordering(210) 00:15:15.108 fused_ordering(211) 00:15:15.108 fused_ordering(212) 00:15:15.108 fused_ordering(213) 00:15:15.108 fused_ordering(214) 00:15:15.108 fused_ordering(215) 00:15:15.108 fused_ordering(216) 00:15:15.108 fused_ordering(217) 00:15:15.108 fused_ordering(218) 00:15:15.108 fused_ordering(219) 00:15:15.108 fused_ordering(220) 00:15:15.108 fused_ordering(221) 00:15:15.108 fused_ordering(222) 00:15:15.108 fused_ordering(223) 00:15:15.108 fused_ordering(224) 00:15:15.108 fused_ordering(225) 00:15:15.108 fused_ordering(226) 00:15:15.108 fused_ordering(227) 00:15:15.108 fused_ordering(228) 00:15:15.108 fused_ordering(229) 00:15:15.108 fused_ordering(230) 00:15:15.108 fused_ordering(231) 00:15:15.108 fused_ordering(232) 00:15:15.108 fused_ordering(233) 00:15:15.108 fused_ordering(234) 00:15:15.108 fused_ordering(235) 00:15:15.108 fused_ordering(236) 00:15:15.108 fused_ordering(237) 00:15:15.108 fused_ordering(238) 00:15:15.108 fused_ordering(239) 00:15:15.108 fused_ordering(240) 00:15:15.108 fused_ordering(241) 00:15:15.108 fused_ordering(242) 00:15:15.108 fused_ordering(243) 00:15:15.108 fused_ordering(244) 00:15:15.108 fused_ordering(245) 00:15:15.108 fused_ordering(246) 00:15:15.108 fused_ordering(247) 00:15:15.108 fused_ordering(248) 00:15:15.108 fused_ordering(249) 00:15:15.108 fused_ordering(250) 00:15:15.108 fused_ordering(251) 00:15:15.108 fused_ordering(252) 00:15:15.108 fused_ordering(253) 00:15:15.108 fused_ordering(254) 00:15:15.108 fused_ordering(255) 00:15:15.108 fused_ordering(256) 00:15:15.108 fused_ordering(257) 00:15:15.108 fused_ordering(258) 00:15:15.108 fused_ordering(259) 00:15:15.108 fused_ordering(260) 00:15:15.108 fused_ordering(261) 00:15:15.108 fused_ordering(262) 00:15:15.108 fused_ordering(263) 00:15:15.108 fused_ordering(264) 00:15:15.108 fused_ordering(265) 00:15:15.108 fused_ordering(266) 00:15:15.108 fused_ordering(267) 00:15:15.108 fused_ordering(268) 00:15:15.108 fused_ordering(269) 00:15:15.108 fused_ordering(270) 00:15:15.108 fused_ordering(271) 00:15:15.108 fused_ordering(272) 00:15:15.108 fused_ordering(273) 00:15:15.108 fused_ordering(274) 00:15:15.108 fused_ordering(275) 00:15:15.108 fused_ordering(276) 00:15:15.108 fused_ordering(277) 00:15:15.108 fused_ordering(278) 00:15:15.108 fused_ordering(279) 00:15:15.108 fused_ordering(280) 00:15:15.108 fused_ordering(281) 00:15:15.108 fused_ordering(282) 00:15:15.108 fused_ordering(283) 00:15:15.108 fused_ordering(284) 00:15:15.108 fused_ordering(285) 00:15:15.108 fused_ordering(286) 00:15:15.108 fused_ordering(287) 00:15:15.108 fused_ordering(288) 00:15:15.108 fused_ordering(289) 00:15:15.108 fused_ordering(290) 00:15:15.108 fused_ordering(291) 00:15:15.108 fused_ordering(292) 00:15:15.108 fused_ordering(293) 00:15:15.108 fused_ordering(294) 00:15:15.108 fused_ordering(295) 00:15:15.108 fused_ordering(296) 00:15:15.108 fused_ordering(297) 00:15:15.108 fused_ordering(298) 00:15:15.108 fused_ordering(299) 00:15:15.108 fused_ordering(300) 00:15:15.108 fused_ordering(301) 00:15:15.108 fused_ordering(302) 00:15:15.108 fused_ordering(303) 00:15:15.108 fused_ordering(304) 00:15:15.108 fused_ordering(305) 00:15:15.108 fused_ordering(306) 00:15:15.108 fused_ordering(307) 00:15:15.108 fused_ordering(308) 00:15:15.108 fused_ordering(309) 00:15:15.108 fused_ordering(310) 00:15:15.108 fused_ordering(311) 
00:15:15.108 fused_ordering(312) 00:15:15.108 fused_ordering(313) 00:15:15.108 fused_ordering(314) 00:15:15.108 fused_ordering(315) 00:15:15.108 fused_ordering(316) 00:15:15.108 fused_ordering(317) 00:15:15.108 fused_ordering(318) 00:15:15.108 fused_ordering(319) 00:15:15.108 fused_ordering(320) 00:15:15.108 fused_ordering(321) 00:15:15.108 fused_ordering(322) 00:15:15.108 fused_ordering(323) 00:15:15.108 fused_ordering(324) 00:15:15.108 fused_ordering(325) 00:15:15.108 fused_ordering(326) 00:15:15.108 fused_ordering(327) 00:15:15.108 fused_ordering(328) 00:15:15.108 fused_ordering(329) 00:15:15.108 fused_ordering(330) 00:15:15.108 fused_ordering(331) 00:15:15.108 fused_ordering(332) 00:15:15.108 fused_ordering(333) 00:15:15.108 fused_ordering(334) 00:15:15.108 fused_ordering(335) 00:15:15.108 fused_ordering(336) 00:15:15.108 fused_ordering(337) 00:15:15.108 fused_ordering(338) 00:15:15.108 fused_ordering(339) 00:15:15.108 fused_ordering(340) 00:15:15.108 fused_ordering(341) 00:15:15.108 fused_ordering(342) 00:15:15.108 fused_ordering(343) 00:15:15.108 fused_ordering(344) 00:15:15.108 fused_ordering(345) 00:15:15.108 fused_ordering(346) 00:15:15.108 fused_ordering(347) 00:15:15.108 fused_ordering(348) 00:15:15.108 fused_ordering(349) 00:15:15.108 fused_ordering(350) 00:15:15.109 fused_ordering(351) 00:15:15.109 fused_ordering(352) 00:15:15.109 fused_ordering(353) 00:15:15.109 fused_ordering(354) 00:15:15.109 fused_ordering(355) 00:15:15.109 fused_ordering(356) 00:15:15.109 fused_ordering(357) 00:15:15.109 fused_ordering(358) 00:15:15.109 fused_ordering(359) 00:15:15.109 fused_ordering(360) 00:15:15.109 fused_ordering(361) 00:15:15.109 fused_ordering(362) 00:15:15.109 fused_ordering(363) 00:15:15.109 fused_ordering(364) 00:15:15.109 fused_ordering(365) 00:15:15.109 fused_ordering(366) 00:15:15.109 fused_ordering(367) 00:15:15.109 fused_ordering(368) 00:15:15.109 fused_ordering(369) 00:15:15.109 fused_ordering(370) 00:15:15.109 fused_ordering(371) 00:15:15.109 fused_ordering(372) 00:15:15.109 fused_ordering(373) 00:15:15.109 fused_ordering(374) 00:15:15.109 fused_ordering(375) 00:15:15.109 fused_ordering(376) 00:15:15.109 fused_ordering(377) 00:15:15.109 fused_ordering(378) 00:15:15.109 fused_ordering(379) 00:15:15.109 fused_ordering(380) 00:15:15.109 fused_ordering(381) 00:15:15.109 fused_ordering(382) 00:15:15.109 fused_ordering(383) 00:15:15.109 fused_ordering(384) 00:15:15.109 fused_ordering(385) 00:15:15.109 fused_ordering(386) 00:15:15.109 fused_ordering(387) 00:15:15.109 fused_ordering(388) 00:15:15.109 fused_ordering(389) 00:15:15.109 fused_ordering(390) 00:15:15.109 fused_ordering(391) 00:15:15.109 fused_ordering(392) 00:15:15.109 fused_ordering(393) 00:15:15.109 fused_ordering(394) 00:15:15.109 fused_ordering(395) 00:15:15.109 fused_ordering(396) 00:15:15.109 fused_ordering(397) 00:15:15.109 fused_ordering(398) 00:15:15.109 fused_ordering(399) 00:15:15.109 fused_ordering(400) 00:15:15.109 fused_ordering(401) 00:15:15.109 fused_ordering(402) 00:15:15.109 fused_ordering(403) 00:15:15.109 fused_ordering(404) 00:15:15.109 fused_ordering(405) 00:15:15.109 fused_ordering(406) 00:15:15.109 fused_ordering(407) 00:15:15.109 fused_ordering(408) 00:15:15.109 fused_ordering(409) 00:15:15.109 fused_ordering(410) 00:15:15.677 fused_ordering(411) 00:15:15.677 fused_ordering(412) 00:15:15.677 fused_ordering(413) 00:15:15.677 fused_ordering(414) 00:15:15.677 fused_ordering(415) 00:15:15.677 fused_ordering(416) 00:15:15.677 fused_ordering(417) 00:15:15.677 fused_ordering(418) 00:15:15.677 
fused_ordering(419) 00:15:15.677 fused_ordering(420) 00:15:15.677 fused_ordering(421) 00:15:15.677 fused_ordering(422) 00:15:15.677 fused_ordering(423) 00:15:15.677 fused_ordering(424) 00:15:15.677 fused_ordering(425) 00:15:15.677 fused_ordering(426) 00:15:15.677 fused_ordering(427) 00:15:15.677 fused_ordering(428) 00:15:15.677 fused_ordering(429) 00:15:15.677 fused_ordering(430) 00:15:15.677 fused_ordering(431) 00:15:15.677 fused_ordering(432) 00:15:15.677 fused_ordering(433) 00:15:15.677 fused_ordering(434) 00:15:15.677 fused_ordering(435) 00:15:15.677 fused_ordering(436) 00:15:15.677 fused_ordering(437) 00:15:15.677 fused_ordering(438) 00:15:15.677 fused_ordering(439) 00:15:15.677 fused_ordering(440) 00:15:15.677 fused_ordering(441) 00:15:15.677 fused_ordering(442) 00:15:15.677 fused_ordering(443) 00:15:15.677 fused_ordering(444) 00:15:15.677 fused_ordering(445) 00:15:15.677 fused_ordering(446) 00:15:15.677 fused_ordering(447) 00:15:15.677 fused_ordering(448) 00:15:15.677 fused_ordering(449) 00:15:15.677 fused_ordering(450) 00:15:15.677 fused_ordering(451) 00:15:15.677 fused_ordering(452) 00:15:15.677 fused_ordering(453) 00:15:15.677 fused_ordering(454) 00:15:15.677 fused_ordering(455) 00:15:15.677 fused_ordering(456) 00:15:15.677 fused_ordering(457) 00:15:15.677 fused_ordering(458) 00:15:15.677 fused_ordering(459) 00:15:15.677 fused_ordering(460) 00:15:15.677 fused_ordering(461) 00:15:15.677 fused_ordering(462) 00:15:15.677 fused_ordering(463) 00:15:15.677 fused_ordering(464) 00:15:15.677 fused_ordering(465) 00:15:15.677 fused_ordering(466) 00:15:15.677 fused_ordering(467) 00:15:15.677 fused_ordering(468) 00:15:15.677 fused_ordering(469) 00:15:15.677 fused_ordering(470) 00:15:15.677 fused_ordering(471) 00:15:15.677 fused_ordering(472) 00:15:15.677 fused_ordering(473) 00:15:15.677 fused_ordering(474) 00:15:15.677 fused_ordering(475) 00:15:15.677 fused_ordering(476) 00:15:15.677 fused_ordering(477) 00:15:15.677 fused_ordering(478) 00:15:15.677 fused_ordering(479) 00:15:15.677 fused_ordering(480) 00:15:15.677 fused_ordering(481) 00:15:15.677 fused_ordering(482) 00:15:15.677 fused_ordering(483) 00:15:15.677 fused_ordering(484) 00:15:15.677 fused_ordering(485) 00:15:15.677 fused_ordering(486) 00:15:15.677 fused_ordering(487) 00:15:15.677 fused_ordering(488) 00:15:15.677 fused_ordering(489) 00:15:15.677 fused_ordering(490) 00:15:15.677 fused_ordering(491) 00:15:15.677 fused_ordering(492) 00:15:15.677 fused_ordering(493) 00:15:15.677 fused_ordering(494) 00:15:15.677 fused_ordering(495) 00:15:15.677 fused_ordering(496) 00:15:15.677 fused_ordering(497) 00:15:15.677 fused_ordering(498) 00:15:15.677 fused_ordering(499) 00:15:15.677 fused_ordering(500) 00:15:15.677 fused_ordering(501) 00:15:15.677 fused_ordering(502) 00:15:15.677 fused_ordering(503) 00:15:15.677 fused_ordering(504) 00:15:15.677 fused_ordering(505) 00:15:15.677 fused_ordering(506) 00:15:15.677 fused_ordering(507) 00:15:15.677 fused_ordering(508) 00:15:15.677 fused_ordering(509) 00:15:15.677 fused_ordering(510) 00:15:15.677 fused_ordering(511) 00:15:15.677 fused_ordering(512) 00:15:15.677 fused_ordering(513) 00:15:15.677 fused_ordering(514) 00:15:15.677 fused_ordering(515) 00:15:15.677 fused_ordering(516) 00:15:15.677 fused_ordering(517) 00:15:15.677 fused_ordering(518) 00:15:15.677 fused_ordering(519) 00:15:15.677 fused_ordering(520) 00:15:15.678 fused_ordering(521) 00:15:15.678 fused_ordering(522) 00:15:15.678 fused_ordering(523) 00:15:15.678 fused_ordering(524) 00:15:15.678 fused_ordering(525) 00:15:15.678 fused_ordering(526) 
00:15:15.678 fused_ordering(527) 00:15:15.678 fused_ordering(528) 00:15:15.678 fused_ordering(529) 00:15:15.678 fused_ordering(530) 00:15:15.678 fused_ordering(531) 00:15:15.678 fused_ordering(532) 00:15:15.678 fused_ordering(533) 00:15:15.678 fused_ordering(534) 00:15:15.678 fused_ordering(535) 00:15:15.678 fused_ordering(536) 00:15:15.678 fused_ordering(537) 00:15:15.678 fused_ordering(538) 00:15:15.678 fused_ordering(539) 00:15:15.678 fused_ordering(540) 00:15:15.678 fused_ordering(541) 00:15:15.678 fused_ordering(542) 00:15:15.678 fused_ordering(543) 00:15:15.678 fused_ordering(544) 00:15:15.678 fused_ordering(545) 00:15:15.678 fused_ordering(546) 00:15:15.678 fused_ordering(547) 00:15:15.678 fused_ordering(548) 00:15:15.678 fused_ordering(549) 00:15:15.678 fused_ordering(550) 00:15:15.678 fused_ordering(551) 00:15:15.678 fused_ordering(552) 00:15:15.678 fused_ordering(553) 00:15:15.678 fused_ordering(554) 00:15:15.678 fused_ordering(555) 00:15:15.678 fused_ordering(556) 00:15:15.678 fused_ordering(557) 00:15:15.678 fused_ordering(558) 00:15:15.678 fused_ordering(559) 00:15:15.678 fused_ordering(560) 00:15:15.678 fused_ordering(561) 00:15:15.678 fused_ordering(562) 00:15:15.678 fused_ordering(563) 00:15:15.678 fused_ordering(564) 00:15:15.678 fused_ordering(565) 00:15:15.678 fused_ordering(566) 00:15:15.678 fused_ordering(567) 00:15:15.678 fused_ordering(568) 00:15:15.678 fused_ordering(569) 00:15:15.678 fused_ordering(570) 00:15:15.678 fused_ordering(571) 00:15:15.678 fused_ordering(572) 00:15:15.678 fused_ordering(573) 00:15:15.678 fused_ordering(574) 00:15:15.678 fused_ordering(575) 00:15:15.678 fused_ordering(576) 00:15:15.678 fused_ordering(577) 00:15:15.678 fused_ordering(578) 00:15:15.678 fused_ordering(579) 00:15:15.678 fused_ordering(580) 00:15:15.678 fused_ordering(581) 00:15:15.678 fused_ordering(582) 00:15:15.678 fused_ordering(583) 00:15:15.678 fused_ordering(584) 00:15:15.678 fused_ordering(585) 00:15:15.678 fused_ordering(586) 00:15:15.678 fused_ordering(587) 00:15:15.678 fused_ordering(588) 00:15:15.678 fused_ordering(589) 00:15:15.678 fused_ordering(590) 00:15:15.678 fused_ordering(591) 00:15:15.678 fused_ordering(592) 00:15:15.678 fused_ordering(593) 00:15:15.678 fused_ordering(594) 00:15:15.678 fused_ordering(595) 00:15:15.678 fused_ordering(596) 00:15:15.678 fused_ordering(597) 00:15:15.678 fused_ordering(598) 00:15:15.678 fused_ordering(599) 00:15:15.678 fused_ordering(600) 00:15:15.678 fused_ordering(601) 00:15:15.678 fused_ordering(602) 00:15:15.678 fused_ordering(603) 00:15:15.678 fused_ordering(604) 00:15:15.678 fused_ordering(605) 00:15:15.678 fused_ordering(606) 00:15:15.678 fused_ordering(607) 00:15:15.678 fused_ordering(608) 00:15:15.678 fused_ordering(609) 00:15:15.678 fused_ordering(610) 00:15:15.678 fused_ordering(611) 00:15:15.678 fused_ordering(612) 00:15:15.678 fused_ordering(613) 00:15:15.678 fused_ordering(614) 00:15:15.678 fused_ordering(615) 00:15:16.245 fused_ordering(616) 00:15:16.245 fused_ordering(617) 00:15:16.245 fused_ordering(618) 00:15:16.245 fused_ordering(619) 00:15:16.245 fused_ordering(620) 00:15:16.245 fused_ordering(621) 00:15:16.245 fused_ordering(622) 00:15:16.246 fused_ordering(623) 00:15:16.246 fused_ordering(624) 00:15:16.246 fused_ordering(625) 00:15:16.246 fused_ordering(626) 00:15:16.246 fused_ordering(627) 00:15:16.246 fused_ordering(628) 00:15:16.246 fused_ordering(629) 00:15:16.246 fused_ordering(630) 00:15:16.246 fused_ordering(631) 00:15:16.246 fused_ordering(632) 00:15:16.246 fused_ordering(633) 00:15:16.246 
fused_ordering(634) 00:15:16.246 fused_ordering(635) 00:15:16.246 fused_ordering(636) 00:15:16.246 fused_ordering(637) 00:15:16.246 fused_ordering(638) 00:15:16.246 fused_ordering(639) 00:15:16.246 fused_ordering(640) 00:15:16.246 fused_ordering(641) 00:15:16.246 fused_ordering(642) 00:15:16.246 fused_ordering(643) 00:15:16.246 fused_ordering(644) 00:15:16.246 fused_ordering(645) 00:15:16.246 fused_ordering(646) 00:15:16.246 fused_ordering(647) 00:15:16.246 fused_ordering(648) 00:15:16.246 fused_ordering(649) 00:15:16.246 fused_ordering(650) 00:15:16.246 fused_ordering(651) 00:15:16.246 fused_ordering(652) 00:15:16.246 fused_ordering(653) 00:15:16.246 fused_ordering(654) 00:15:16.246 fused_ordering(655) 00:15:16.246 fused_ordering(656) 00:15:16.246 fused_ordering(657) 00:15:16.246 fused_ordering(658) 00:15:16.246 fused_ordering(659) 00:15:16.246 fused_ordering(660) 00:15:16.246 fused_ordering(661) 00:15:16.246 fused_ordering(662) 00:15:16.246 fused_ordering(663) 00:15:16.246 fused_ordering(664) 00:15:16.246 fused_ordering(665) 00:15:16.246 fused_ordering(666) 00:15:16.246 fused_ordering(667) 00:15:16.246 fused_ordering(668) 00:15:16.246 fused_ordering(669) 00:15:16.246 fused_ordering(670) 00:15:16.246 fused_ordering(671) 00:15:16.246 fused_ordering(672) 00:15:16.246 fused_ordering(673) 00:15:16.246 fused_ordering(674) 00:15:16.246 fused_ordering(675) 00:15:16.246 fused_ordering(676) 00:15:16.246 fused_ordering(677) 00:15:16.246 fused_ordering(678) 00:15:16.246 fused_ordering(679) 00:15:16.246 fused_ordering(680) 00:15:16.246 fused_ordering(681) 00:15:16.246 fused_ordering(682) 00:15:16.246 fused_ordering(683) 00:15:16.246 fused_ordering(684) 00:15:16.246 fused_ordering(685) 00:15:16.246 fused_ordering(686) 00:15:16.246 fused_ordering(687) 00:15:16.246 fused_ordering(688) 00:15:16.246 fused_ordering(689) 00:15:16.246 fused_ordering(690) 00:15:16.246 fused_ordering(691) 00:15:16.246 fused_ordering(692) 00:15:16.246 fused_ordering(693) 00:15:16.246 fused_ordering(694) 00:15:16.246 fused_ordering(695) 00:15:16.246 fused_ordering(696) 00:15:16.246 fused_ordering(697) 00:15:16.246 fused_ordering(698) 00:15:16.246 fused_ordering(699) 00:15:16.246 fused_ordering(700) 00:15:16.246 fused_ordering(701) 00:15:16.246 fused_ordering(702) 00:15:16.246 fused_ordering(703) 00:15:16.246 fused_ordering(704) 00:15:16.246 fused_ordering(705) 00:15:16.246 fused_ordering(706) 00:15:16.246 fused_ordering(707) 00:15:16.246 fused_ordering(708) 00:15:16.246 fused_ordering(709) 00:15:16.246 fused_ordering(710) 00:15:16.246 fused_ordering(711) 00:15:16.246 fused_ordering(712) 00:15:16.246 fused_ordering(713) 00:15:16.246 fused_ordering(714) 00:15:16.246 fused_ordering(715) 00:15:16.246 fused_ordering(716) 00:15:16.246 fused_ordering(717) 00:15:16.246 fused_ordering(718) 00:15:16.246 fused_ordering(719) 00:15:16.246 fused_ordering(720) 00:15:16.246 fused_ordering(721) 00:15:16.246 fused_ordering(722) 00:15:16.246 fused_ordering(723) 00:15:16.246 fused_ordering(724) 00:15:16.246 fused_ordering(725) 00:15:16.246 fused_ordering(726) 00:15:16.246 fused_ordering(727) 00:15:16.246 fused_ordering(728) 00:15:16.246 fused_ordering(729) 00:15:16.246 fused_ordering(730) 00:15:16.246 fused_ordering(731) 00:15:16.246 fused_ordering(732) 00:15:16.246 fused_ordering(733) 00:15:16.246 fused_ordering(734) 00:15:16.246 fused_ordering(735) 00:15:16.246 fused_ordering(736) 00:15:16.246 fused_ordering(737) 00:15:16.246 fused_ordering(738) 00:15:16.246 fused_ordering(739) 00:15:16.246 fused_ordering(740) 00:15:16.246 fused_ordering(741) 
00:15:16.246 fused_ordering(742) 00:15:16.246 fused_ordering(743) 00:15:16.246 fused_ordering(744) 00:15:16.246 fused_ordering(745) 00:15:16.246 fused_ordering(746) 00:15:16.246 fused_ordering(747) 00:15:16.246 fused_ordering(748) 00:15:16.246 fused_ordering(749) 00:15:16.246 fused_ordering(750) 00:15:16.246 fused_ordering(751) 00:15:16.246 fused_ordering(752) 00:15:16.246 fused_ordering(753) 00:15:16.246 fused_ordering(754) 00:15:16.246 fused_ordering(755) 00:15:16.246 fused_ordering(756) 00:15:16.246 fused_ordering(757) 00:15:16.246 fused_ordering(758) 00:15:16.246 fused_ordering(759) 00:15:16.246 fused_ordering(760) 00:15:16.246 fused_ordering(761) 00:15:16.246 fused_ordering(762) 00:15:16.246 fused_ordering(763) 00:15:16.246 fused_ordering(764) 00:15:16.246 fused_ordering(765) 00:15:16.246 fused_ordering(766) 00:15:16.246 fused_ordering(767) 00:15:16.246 fused_ordering(768) 00:15:16.246 fused_ordering(769) 00:15:16.246 fused_ordering(770) 00:15:16.246 fused_ordering(771) 00:15:16.246 fused_ordering(772) 00:15:16.246 fused_ordering(773) 00:15:16.246 fused_ordering(774) 00:15:16.246 fused_ordering(775) 00:15:16.246 fused_ordering(776) 00:15:16.246 fused_ordering(777) 00:15:16.246 fused_ordering(778) 00:15:16.246 fused_ordering(779) 00:15:16.246 fused_ordering(780) 00:15:16.246 fused_ordering(781) 00:15:16.246 fused_ordering(782) 00:15:16.246 fused_ordering(783) 00:15:16.246 fused_ordering(784) 00:15:16.246 fused_ordering(785) 00:15:16.246 fused_ordering(786) 00:15:16.246 fused_ordering(787) 00:15:16.246 fused_ordering(788) 00:15:16.246 fused_ordering(789) 00:15:16.246 fused_ordering(790) 00:15:16.246 fused_ordering(791) 00:15:16.246 fused_ordering(792) 00:15:16.246 fused_ordering(793) 00:15:16.246 fused_ordering(794) 00:15:16.246 fused_ordering(795) 00:15:16.246 fused_ordering(796) 00:15:16.246 fused_ordering(797) 00:15:16.246 fused_ordering(798) 00:15:16.246 fused_ordering(799) 00:15:16.246 fused_ordering(800) 00:15:16.246 fused_ordering(801) 00:15:16.246 fused_ordering(802) 00:15:16.246 fused_ordering(803) 00:15:16.246 fused_ordering(804) 00:15:16.246 fused_ordering(805) 00:15:16.246 fused_ordering(806) 00:15:16.246 fused_ordering(807) 00:15:16.246 fused_ordering(808) 00:15:16.246 fused_ordering(809) 00:15:16.246 fused_ordering(810) 00:15:16.246 fused_ordering(811) 00:15:16.246 fused_ordering(812) 00:15:16.246 fused_ordering(813) 00:15:16.246 fused_ordering(814) 00:15:16.246 fused_ordering(815) 00:15:16.246 fused_ordering(816) 00:15:16.246 fused_ordering(817) 00:15:16.246 fused_ordering(818) 00:15:16.246 fused_ordering(819) 00:15:16.246 fused_ordering(820) 00:15:16.815 fused_ordering(821) 00:15:16.815 fused_ordering(822) 00:15:16.815 fused_ordering(823) 00:15:16.815 fused_ordering(824) 00:15:16.815 fused_ordering(825) 00:15:16.815 fused_ordering(826) 00:15:16.815 fused_ordering(827) 00:15:16.815 fused_ordering(828) 00:15:16.815 fused_ordering(829) 00:15:16.815 fused_ordering(830) 00:15:16.815 fused_ordering(831) 00:15:16.815 fused_ordering(832) 00:15:16.815 fused_ordering(833) 00:15:16.815 fused_ordering(834) 00:15:16.815 fused_ordering(835) 00:15:16.815 fused_ordering(836) 00:15:16.815 fused_ordering(837) 00:15:16.815 fused_ordering(838) 00:15:16.815 fused_ordering(839) 00:15:16.815 fused_ordering(840) 00:15:16.815 fused_ordering(841) 00:15:16.815 fused_ordering(842) 00:15:16.815 fused_ordering(843) 00:15:16.815 fused_ordering(844) 00:15:16.815 fused_ordering(845) 00:15:16.815 fused_ordering(846) 00:15:16.815 fused_ordering(847) 00:15:16.815 fused_ordering(848) 00:15:16.815 
fused_ordering(849) 00:15:16.815 fused_ordering(850) 00:15:16.815 fused_ordering(851) 00:15:16.815 fused_ordering(852) 00:15:16.815 fused_ordering(853) 00:15:16.815 fused_ordering(854) 00:15:16.815 fused_ordering(855) 00:15:16.815 fused_ordering(856) 00:15:16.815 fused_ordering(857) 00:15:16.815 fused_ordering(858) 00:15:16.815 fused_ordering(859) 00:15:16.815 fused_ordering(860) 00:15:16.815 fused_ordering(861) 00:15:16.815 fused_ordering(862) 00:15:16.815 fused_ordering(863) 00:15:16.815 fused_ordering(864) 00:15:16.815 fused_ordering(865) 00:15:16.815 fused_ordering(866) 00:15:16.815 fused_ordering(867) 00:15:16.815 fused_ordering(868) 00:15:16.815 fused_ordering(869) 00:15:16.815 fused_ordering(870) 00:15:16.815 fused_ordering(871) 00:15:16.815 fused_ordering(872) 00:15:16.815 fused_ordering(873) 00:15:16.815 fused_ordering(874) 00:15:16.815 fused_ordering(875) 00:15:16.815 fused_ordering(876) 00:15:16.815 fused_ordering(877) 00:15:16.815 fused_ordering(878) 00:15:16.815 fused_ordering(879) 00:15:16.815 fused_ordering(880) 00:15:16.815 fused_ordering(881) 00:15:16.815 fused_ordering(882) 00:15:16.815 fused_ordering(883) 00:15:16.815 fused_ordering(884) 00:15:16.815 fused_ordering(885) 00:15:16.815 fused_ordering(886) 00:15:16.815 fused_ordering(887) 00:15:16.815 fused_ordering(888) 00:15:16.815 fused_ordering(889) 00:15:16.815 fused_ordering(890) 00:15:16.815 fused_ordering(891) 00:15:16.815 fused_ordering(892) 00:15:16.815 fused_ordering(893) 00:15:16.815 fused_ordering(894) 00:15:16.815 fused_ordering(895) 00:15:16.815 fused_ordering(896) 00:15:16.815 fused_ordering(897) 00:15:16.815 fused_ordering(898) 00:15:16.815 fused_ordering(899) 00:15:16.815 fused_ordering(900) 00:15:16.815 fused_ordering(901) 00:15:16.815 fused_ordering(902) 00:15:16.815 fused_ordering(903) 00:15:16.815 fused_ordering(904) 00:15:16.815 fused_ordering(905) 00:15:16.815 fused_ordering(906) 00:15:16.815 fused_ordering(907) 00:15:16.815 fused_ordering(908) 00:15:16.815 fused_ordering(909) 00:15:16.815 fused_ordering(910) 00:15:16.815 fused_ordering(911) 00:15:16.815 fused_ordering(912) 00:15:16.815 fused_ordering(913) 00:15:16.815 fused_ordering(914) 00:15:16.815 fused_ordering(915) 00:15:16.815 fused_ordering(916) 00:15:16.815 fused_ordering(917) 00:15:16.815 fused_ordering(918) 00:15:16.815 fused_ordering(919) 00:15:16.815 fused_ordering(920) 00:15:16.815 fused_ordering(921) 00:15:16.815 fused_ordering(922) 00:15:16.815 fused_ordering(923) 00:15:16.815 fused_ordering(924) 00:15:16.815 fused_ordering(925) 00:15:16.815 fused_ordering(926) 00:15:16.815 fused_ordering(927) 00:15:16.815 fused_ordering(928) 00:15:16.815 fused_ordering(929) 00:15:16.815 fused_ordering(930) 00:15:16.815 fused_ordering(931) 00:15:16.815 fused_ordering(932) 00:15:16.815 fused_ordering(933) 00:15:16.815 fused_ordering(934) 00:15:16.816 fused_ordering(935) 00:15:16.816 fused_ordering(936) 00:15:16.816 fused_ordering(937) 00:15:16.816 fused_ordering(938) 00:15:16.816 fused_ordering(939) 00:15:16.816 fused_ordering(940) 00:15:16.816 fused_ordering(941) 00:15:16.816 fused_ordering(942) 00:15:16.816 fused_ordering(943) 00:15:16.816 fused_ordering(944) 00:15:16.816 fused_ordering(945) 00:15:16.816 fused_ordering(946) 00:15:16.816 fused_ordering(947) 00:15:16.816 fused_ordering(948) 00:15:16.816 fused_ordering(949) 00:15:16.816 fused_ordering(950) 00:15:16.816 fused_ordering(951) 00:15:16.816 fused_ordering(952) 00:15:16.816 fused_ordering(953) 00:15:16.816 fused_ordering(954) 00:15:16.816 fused_ordering(955) 00:15:16.816 fused_ordering(956) 
00:15:16.816 fused_ordering(957) 00:15:16.816 fused_ordering(958) 00:15:16.816 fused_ordering(959) 00:15:16.816 fused_ordering(960) 00:15:16.816 fused_ordering(961) 00:15:16.816 fused_ordering(962) 00:15:16.816 fused_ordering(963) 00:15:16.816 fused_ordering(964) 00:15:16.816 fused_ordering(965) 00:15:16.816 fused_ordering(966) 00:15:16.816 fused_ordering(967) 00:15:16.816 fused_ordering(968) 00:15:16.816 fused_ordering(969) 00:15:16.816 fused_ordering(970) 00:15:16.816 fused_ordering(971) 00:15:16.816 fused_ordering(972) 00:15:16.816 fused_ordering(973) 00:15:16.816 fused_ordering(974) 00:15:16.816 fused_ordering(975) 00:15:16.816 fused_ordering(976) 00:15:16.816 fused_ordering(977) 00:15:16.816 fused_ordering(978) 00:15:16.816 fused_ordering(979) 00:15:16.816 fused_ordering(980) 00:15:16.816 fused_ordering(981) 00:15:16.816 fused_ordering(982) 00:15:16.816 fused_ordering(983) 00:15:16.816 fused_ordering(984) 00:15:16.816 fused_ordering(985) 00:15:16.816 fused_ordering(986) 00:15:16.816 fused_ordering(987) 00:15:16.816 fused_ordering(988) 00:15:16.816 fused_ordering(989) 00:15:16.816 fused_ordering(990) 00:15:16.816 fused_ordering(991) 00:15:16.816 fused_ordering(992) 00:15:16.816 fused_ordering(993) 00:15:16.816 fused_ordering(994) 00:15:16.816 fused_ordering(995) 00:15:16.816 fused_ordering(996) 00:15:16.816 fused_ordering(997) 00:15:16.816 fused_ordering(998) 00:15:16.816 fused_ordering(999) 00:15:16.816 fused_ordering(1000) 00:15:16.816 fused_ordering(1001) 00:15:16.816 fused_ordering(1002) 00:15:16.816 fused_ordering(1003) 00:15:16.816 fused_ordering(1004) 00:15:16.816 fused_ordering(1005) 00:15:16.816 fused_ordering(1006) 00:15:16.816 fused_ordering(1007) 00:15:16.816 fused_ordering(1008) 00:15:16.816 fused_ordering(1009) 00:15:16.816 fused_ordering(1010) 00:15:16.816 fused_ordering(1011) 00:15:16.816 fused_ordering(1012) 00:15:16.816 fused_ordering(1013) 00:15:16.816 fused_ordering(1014) 00:15:16.816 fused_ordering(1015) 00:15:16.816 fused_ordering(1016) 00:15:16.816 fused_ordering(1017) 00:15:16.816 fused_ordering(1018) 00:15:16.816 fused_ordering(1019) 00:15:16.816 fused_ordering(1020) 00:15:16.816 fused_ordering(1021) 00:15:16.816 fused_ordering(1022) 00:15:16.816 fused_ordering(1023) 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.816 rmmod nvme_tcp 00:15:16.816 rmmod nvme_fabrics 00:15:16.816 rmmod nvme_keyring 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3855094 ']' 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3855094 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3855094 ']' 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3855094 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3855094 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3855094' 00:15:16.816 killing process with pid 3855094 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3855094 00:15:16.816 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3855094 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.076 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.040 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.040 00:15:19.040 real 0m12.750s 00:15:19.040 user 0m6.407s 00:15:19.040 sys 0m7.314s 00:15:19.040 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.040 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.040 ************************************ 00:15:19.040 END TEST nvmf_fused_ordering 00:15:19.040 ************************************ 00:15:19.040 10:30:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:19.040 10:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:19.040 10:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.040 10:30:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:19.040 ************************************ 00:15:19.040 START TEST nvmf_ns_masking 00:15:19.040 ************************************ 00:15:19.040 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:19.301 * Looking for test storage... 00:15:19.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.301 10:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2de8afba-d996-4b4b-8e4f-3189492ad55d 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bb4ed098-375f-4c4b-8415-058e85aac4e3 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=63d70608-26f3-4e84-b680-79d82114adcd 00:15:19.301 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:19.302 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:25.864 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:25.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:25.864 Found net devices under 0000:af:00.0: cvl_0_0 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:25.864 Found net devices under 0000:af:00.1: cvl_0_1 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.864 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:26.122 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:26.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:15:26.122 00:15:26.122 --- 10.0.0.2 ping statistics --- 00:15:26.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.122 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:26.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:15:26.122 00:15:26.122 --- 10.0.0.1 ping statistics --- 00:15:26.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.122 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3859430 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3859430 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3859430 ']' 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.122 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:26.380 [2024-07-25 10:30:29.826910] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:15:26.380 [2024-07-25 10:30:29.826957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.380 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.380 [2024-07-25 10:30:29.900371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.380 [2024-07-25 10:30:29.972132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.380 [2024-07-25 10:30:29.972172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.380 [2024-07-25 10:30:29.972182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.380 [2024-07-25 10:30:29.972191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.380 [2024-07-25 10:30:29.972202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.380 [2024-07-25 10:30:29.972224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.946 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.946 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:26.946 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:26.946 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.946 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:27.205 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.205 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:27.205 [2024-07-25 10:30:30.820333] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.205 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:27.205 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:27.205 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:27.464 Malloc1 00:15:27.464 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:27.722 Malloc2 00:15:27.722 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:27.722 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:27.980 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.238 [2024-07-25 10:30:31.694408] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.238 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:28.238 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63d70608-26f3-4e84-b680-79d82114adcd -a 10.0.0.2 -s 4420 -i 4 00:15:28.238 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:28.238 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:28.238 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:28.238 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:28.238 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:30.769 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:30.769 [ 0]:0x1 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df05f404032046bdb11c26619c251c7e 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df05f404032046bdb11c26619c251c7e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:30.769 [ 0]:0x1 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df05f404032046bdb11c26619c251c7e 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df05f404032046bdb11c26619c251c7e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:30.769 [ 1]:0x2 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a01b2e40dda74a948d3c8760d7452e74 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a01b2e40dda74a948d3c8760d7452e74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:30.769 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:31.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.028 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.028 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:31.287 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:31.287 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63d70608-26f3-4e84-b680-79d82114adcd -a 10.0.0.2 -s 4420 -i 4 00:15:31.545 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:31.545 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:31.545 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.545 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:31.545 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:31.545 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:33.443 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:33.701 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:33.702 [ 0]:0x2 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a01b2e40dda74a948d3c8760d7452e74 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a01b2e40dda74a948d3c8760d7452e74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.702 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:33.960 [ 0]:0x1 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df05f404032046bdb11c26619c251c7e 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df05f404032046bdb11c26619c251c7e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:33.960 [ 1]:0x2 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a01b2e40dda74a948d3c8760d7452e74 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a01b2e40dda74a948d3c8760d7452e74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.960 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:34.218 [ 0]:0x2 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a01b2e40dda74a948d3c8760d7452e74 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a01b2e40dda74a948d3c8760d7452e74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.218 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:34.476 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:34.476 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63d70608-26f3-4e84-b680-79d82114adcd -a 10.0.0.2 -s 4420 -i 4 00:15:34.734 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:34.734 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:34.734 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.734 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:34.734 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:34.734 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:36.636 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
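The ns_is_visible checks traced above reduce to two nvme-cli calls against the connected controller: list the active namespaces and read the NGUID from Identify Namespace, treating an all-zero NGUID as "not visible to this host". A minimal sketch of that helper, assuming the same /dev/nvme0 controller name and the nvme-cli/jq tools the test already uses:

    # sketch only -- mirrors the trace above; the controller name and tools are assumptions
    ns_is_visible() {
        local nsid=$1    # e.g. 0x1 or 0x2
        # a masked namespace does not appear in this host's active namespace list
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # and Identify Namespace reports an all-zero NGUID for it
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1   # succeeds only while the connecting host NQN is allowed to see namespace 1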
00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:36.895 [ 0]:0x1 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df05f404032046bdb11c26619c251c7e 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df05f404032046bdb11c26619c251c7e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:36.895 [ 1]:0x2 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a01b2e40dda74a948d3c8760d7452e74 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a01b2e40dda74a948d3c8760d7452e74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:36.895 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:37.155 10:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:37.155 [ 0]:0x2 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a01b2e40dda74a948d3c8760d7452e74 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a01b2e40dda74a948d3c8760d7452e74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:37.155 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:37.414 [2024-07-25 10:30:40.916984] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:37.414 request: 00:15:37.414 { 00:15:37.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.414 "nsid": 2, 00:15:37.414 "host": "nqn.2016-06.io.spdk:host1", 00:15:37.414 "method": "nvmf_ns_remove_host", 00:15:37.414 "req_id": 1 00:15:37.414 } 00:15:37.414 Got JSON-RPC error response 00:15:37.414 response: 00:15:37.414 { 00:15:37.414 "code": -32602, 00:15:37.414 "message": "Invalid parameters" 00:15:37.414 } 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:37.414 [ 0]:0x2 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:37.414 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a01b2e40dda74a948d3c8760d7452e74 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a01b2e40dda74a948d3c8760d7452e74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3861545 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3861545 /var/tmp/host.sock 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3861545 ']' 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:37.414 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.415 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:37.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:37.415 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.415 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:37.415 [2024-07-25 10:30:41.113375] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
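The ns_is_visible checks traced above come down to two nvme-cli calls plus a jq filter; a minimal standalone sketch of the same check, assuming the masked subsystem is already connected as /dev/nvme0 as in this run, would be:

# list-ns only reports namespace IDs the connected controller is allowed to see
nvme list-ns /dev/nvme0 | grep 0x2

# the test treats an all-zero NGUID as "not visible"; a visible namespace reports its real NGUID
nguid=$(nvme id-ns /dev/nvme0 -n 0x2 -o json | jq -r .nguid)
[[ "$nguid" != "00000000000000000000000000000000" ]] && echo "nsid 0x2 visible (nguid=$nguid)"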
00:15:37.415 [2024-07-25 10:30:41.113426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861545 ] 00:15:37.674 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.674 [2024-07-25 10:30:41.182168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.674 [2024-07-25 10:30:41.250849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.241 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.241 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:38.241 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.500 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:38.758 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2de8afba-d996-4b4b-8e4f-3189492ad55d 00:15:38.758 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:38.758 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2DE8AFBAD9964B4B8E4F3189492AD55D -i 00:15:38.758 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bb4ed098-375f-4c4b-8415-058e85aac4e3 00:15:38.758 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:38.758 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BB4ED098375F4C4B8415058E85AAC4E3 -i 00:15:39.017 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:39.275 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:39.275 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:39.275 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:39.853 nvme0n1 00:15:39.853 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:39.853 10:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:40.113 nvme1n2 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:40.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:40.371 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2de8afba-d996-4b4b-8e4f-3189492ad55d == \2\d\e\8\a\f\b\a\-\d\9\9\6\-\4\b\4\b\-\8\e\4\f\-\3\1\8\9\4\9\2\a\d\5\5\d ]] 00:15:40.371 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:40.371 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:40.371 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bb4ed098-375f-4c4b-8415-058e85aac4e3 == \b\b\4\e\d\0\9\8\-\3\7\5\f\-\4\c\4\b\-\8\4\1\5\-\0\5\8\e\8\5\a\a\c\4\e\3 ]] 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3861545 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3861545 ']' 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3861545 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3861545 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 3861545' 00:15:40.630 killing process with pid 3861545 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3861545 00:15:40.630 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3861545 00:15:40.889 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.147 rmmod nvme_tcp 00:15:41.147 rmmod nvme_fabrics 00:15:41.147 rmmod nvme_keyring 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3859430 ']' 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3859430 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3859430 ']' 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3859430 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3859430 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3859430' 00:15:41.147 killing process with pid 3859430 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3859430 00:15:41.147 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3859430 00:15:41.406 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.406 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.406 
10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.406 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.406 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.406 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.406 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.406 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.939 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:43.939 00:15:43.939 real 0m24.407s 00:15:43.939 user 0m24.207s 00:15:43.939 sys 0m8.275s 00:15:43.939 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:43.939 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:43.939 ************************************ 00:15:43.939 END TEST nvmf_ns_masking 00:15:43.939 ************************************ 00:15:43.939 10:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:43.939 10:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:43.940 ************************************ 00:15:43.940 START TEST nvmf_nvme_cli 00:15:43.940 ************************************ 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:43.940 * Looking for test storage... 
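Stripped of the xtrace noise, the host-scoped masking setup exercised just before this teardown reduces to the RPC sequence below. This is a condensed sketch, not the literal script: the full /var/jenkins/... path is shortened to rpc.py, and the NGUIDs and NQNs are the ones from the run above.

# drop the auto-visible namespaces, then re-add them with explicit NGUIDs;
# the masking test passes -i so each namespace starts hidden and is exposed per host below
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2DE8AFBAD9964B4B8E4F3189492AD55D -i
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BB4ED098375F4C4B8415058E85AAC4E3 -i

# expose exactly one namespace to each host NQN
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

# attach from the second SPDK app (host.sock) once per host NQN and check the reported uuid
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect 2de8afba-d996-4b4b-8e4f-3189492ad55d
rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'   # expect bb4ed098-375f-4c4b-8415-058e85aac4e3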
00:15:43.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.940 10:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:43.940 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.506 10:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.506 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:50.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:50.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.507 10:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:50.507 Found net devices under 0000:af:00.0: cvl_0_0 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:50.507 Found net devices under 0000:af:00.1: cvl_0_1 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.507 10:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.507 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.507 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.507 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.507 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.507 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:15:50.766 00:15:50.766 --- 10.0.0.2 ping statistics --- 00:15:50.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.766 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:15:50.766 00:15:50.766 --- 10.0.0.1 ping statistics --- 00:15:50.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.766 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.766 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3865795 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3865795 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3865795 ']' 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.767 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:50.767 [2024-07-25 10:30:54.360322] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
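Before the nvmf target is started, the nvmftestinit/nvmf_tcp_init phase above isolates one e810 port in a private network namespace so that initiator and target traffic cross a real link. Condensed, and using the interface names and addresses from this run, that setup is:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt started above is itself run under ip netns exec cvl_0_0_ns_spdk, so its 10.0.0.2:4420 listener is reached from the default namespace over cvl_0_1.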
00:15:50.767 [2024-07-25 10:30:54.360369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.767 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.767 [2024-07-25 10:30:54.433873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.025 [2024-07-25 10:30:54.509667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.025 [2024-07-25 10:30:54.509705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.025 [2024-07-25 10:30:54.509719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.025 [2024-07-25 10:30:54.509728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.025 [2024-07-25 10:30:54.509735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.025 [2024-07-25 10:30:54.509781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.025 [2024-07-25 10:30:54.509890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.025 [2024-07-25 10:30:54.509980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.025 [2024-07-25 10:30:54.509982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.591 [2024-07-25 10:30:55.227117] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.591 Malloc0 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:51.591 10:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.591 Malloc1 00:15:51.591 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:51.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 [2024-07-25 10:30:55.311524] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:15:51.850 00:15:51.850 Discovery Log Number of Records 2, Generation counter 2 00:15:51.850 =====Discovery Log Entry 0====== 00:15:51.850 trtype: tcp 00:15:51.850 adrfam: ipv4 00:15:51.850 subtype: current discovery subsystem 00:15:51.850 treq: not required 
00:15:51.850 portid: 0 00:15:51.850 trsvcid: 4420 00:15:51.850 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:51.850 traddr: 10.0.0.2 00:15:51.850 eflags: explicit discovery connections, duplicate discovery information 00:15:51.850 sectype: none 00:15:51.850 =====Discovery Log Entry 1====== 00:15:51.850 trtype: tcp 00:15:51.850 adrfam: ipv4 00:15:51.850 subtype: nvme subsystem 00:15:51.850 treq: not required 00:15:51.850 portid: 0 00:15:51.850 trsvcid: 4420 00:15:51.850 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:51.850 traddr: 10.0.0.2 00:15:51.850 eflags: none 00:15:51.850 sectype: none 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:51.850 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.237 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:53.237 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:53.237 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.237 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:53.237 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:53.237 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.136 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:55.394 /dev/nvme0n1 ]] 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:55.394 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.652 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.653 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.911 rmmod nvme_tcp 00:15:55.911 rmmod nvme_fabrics 00:15:55.911 rmmod nvme_keyring 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3865795 ']' 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3865795 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3865795 ']' 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3865795 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3865795 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3865795' 00:15:55.911 killing process with pid 3865795 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3865795 00:15:55.911 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3865795 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.170 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.072 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.330 00:15:58.330 real 0m14.576s 00:15:58.330 user 0m22.434s 00:15:58.330 sys 0m6.166s 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.330 ************************************ 00:15:58.330 END TEST nvmf_nvme_cli 00:15:58.330 ************************************ 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.330 ************************************ 00:15:58.330 START TEST nvmf_vfio_user 00:15:58.330 ************************************ 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:58.330 * Looking for test storage... 
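For readability, the nvme_cli test that just finished reduces to the target/initiator sequence below. This is a condensed sketch: rpc.py stands in for the rpc_cmd wrapper with its full path, and NVME_HOSTNQN/NVME_HOSTID are the values generated by common.sh at the top of the test.

# target side: transport, two malloc namespaces, data and discovery listeners
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# initiator side: discovery shows two records, connect exposes two namespaces
nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 (nvme0n1, nvme0n2)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# teardown
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1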
00:15:58.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:58.330 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
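For readers reproducing this run locally: the nvmf/common.sh trace just above reduces to a handful of environment defaults. A minimal sketch of those values, assuming only that nvme-cli is installed for `nvme gen-hostnqn`; the real common.sh does considerably more:

    # Test environment defaults as traced above; a sketch, not the full common.sh.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # requires nvme-cli; yields nqn.2014-08.org.nvmexpress:uuid:...
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NET_TYPE=phy
    echo "Host NQN: $NVME_HOSTNQN"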
00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.331 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:58.331 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3867256 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3867256' 00:15:58.331 Process pid: 3867256 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3867256 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3867256 ']' 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:58.331 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:58.588 [2024-07-25 10:31:02.054657] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:15:58.588 [2024-07-25 10:31:02.054707] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.588 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.588 [2024-07-25 10:31:02.124582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.588 [2024-07-25 10:31:02.201639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.588 [2024-07-25 10:31:02.201679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:58.588 [2024-07-25 10:31:02.201689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.588 [2024-07-25 10:31:02.201697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.588 [2024-07-25 10:31:02.201724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.588 [2024-07-25 10:31:02.201768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.588 [2024-07-25 10:31:02.201784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.588 [2024-07-25 10:31:02.201871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.588 [2024-07-25 10:31:02.201874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.520 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.520 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:59.520 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:00.508 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:00.508 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:00.508 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:00.508 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:00.508 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:00.508 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:00.767 Malloc1 00:16:00.767 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:00.767 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:01.024 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:01.282 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:01.282 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:01.282 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:01.282 Malloc2 00:16:01.540 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
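Stripped of the xtrace prefixes, the target bring-up and per-device setup traced above reduce to a few commands. A condensed sketch for device 1, where $SPDK_DIR is assumed to stand in for the workspace spdk checkout; the same sequence for cnode2/Malloc2/vfio-user2 continues in the trace below:

    # Start the target and create the vfio-user transport, as exercised above.
    # $SPDK_DIR is a stand-in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
    rm -rf /var/run/vfio-user
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # The test script waits for the RPC socket /var/tmp/spdk.sock (waitforlisten);
    # a plain sleep stands in for that here.
    sleep 2
    rpc=$SPDK_DIR/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    # One malloc-backed subsystem listening on a vfio-user socket directory.
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The identify and perf tools later in this run are then pointed at that socket directory with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', exactly as in the commands that follow.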
00:16:01.540 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:01.797 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:02.057 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:02.057 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:02.057 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:02.057 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:02.057 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:02.057 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:02.057 [2024-07-25 10:31:05.567740] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:16:02.057 [2024-07-25 10:31:05.567779] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867819 ] 00:16:02.057 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.057 [2024-07-25 10:31:05.598058] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:02.057 [2024-07-25 10:31:05.606081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:02.057 [2024-07-25 10:31:05.606101] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fca34c93000 00:16:02.057 [2024-07-25 10:31:05.607076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.608074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.609084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.610090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.611095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.612104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.613107] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.614114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:02.057 [2024-07-25 10:31:05.615122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:02.057 [2024-07-25 10:31:05.615133] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fca34c88000 00:16:02.057 [2024-07-25 10:31:05.616026] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:02.057 [2024-07-25 10:31:05.629337] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:02.057 [2024-07-25 10:31:05.629364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:02.057 [2024-07-25 10:31:05.632217] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:02.057 [2024-07-25 10:31:05.632256] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:02.057 [2024-07-25 10:31:05.632328] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:02.057 [2024-07-25 10:31:05.632349] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:02.058 [2024-07-25 10:31:05.632356] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:02.058 [2024-07-25 10:31:05.633218] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:02.058 [2024-07-25 10:31:05.633231] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:02.058 [2024-07-25 10:31:05.633240] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:02.058 [2024-07-25 10:31:05.634228] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:02.058 [2024-07-25 10:31:05.634239] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:02.058 [2024-07-25 10:31:05.634248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:02.058 [2024-07-25 10:31:05.635234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:02.058 [2024-07-25 10:31:05.635243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:02.058 [2024-07-25 10:31:05.636239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:02.058 [2024-07-25 10:31:05.636249] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:02.058 [2024-07-25 10:31:05.636256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:02.058 [2024-07-25 10:31:05.636264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:02.058 [2024-07-25 10:31:05.636371] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:02.058 [2024-07-25 10:31:05.636377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:02.058 [2024-07-25 10:31:05.636383] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:02.058 [2024-07-25 10:31:05.637241] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:02.058 [2024-07-25 10:31:05.638247] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:02.058 [2024-07-25 10:31:05.639257] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:02.058 [2024-07-25 10:31:05.640256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:02.058 [2024-07-25 10:31:05.640326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:02.058 [2024-07-25 10:31:05.641267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:02.058 [2024-07-25 10:31:05.641276] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:02.058 [2024-07-25 10:31:05.641282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641303] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:02.058 [2024-07-25 10:31:05.641315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641333] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:02.058 [2024-07-25 10:31:05.641340] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:02.058 [2024-07-25 10:31:05.641345] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:02.058 [2024-07-25 10:31:05.641358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:02.058 [2024-07-25 10:31:05.641400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:02.058 [2024-07-25 10:31:05.641411] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:02.058 [2024-07-25 10:31:05.641417] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:02.058 [2024-07-25 10:31:05.641422] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:02.058 [2024-07-25 10:31:05.641428] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:02.058 [2024-07-25 10:31:05.641434] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:02.058 [2024-07-25 10:31:05.641440] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:02.058 [2024-07-25 10:31:05.641446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:02.058 [2024-07-25 10:31:05.641482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:02.058 [2024-07-25 10:31:05.641496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.058 [2024-07-25 10:31:05.641505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.058 [2024-07-25 10:31:05.641514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.058 [2024-07-25 10:31:05.641523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.058 [2024-07-25 10:31:05.641529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:02.058 [2024-07-25 10:31:05.641557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:02.058 [2024-07-25 10:31:05.641565] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:02.058 
[2024-07-25 10:31:05.641572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:02.058 [2024-07-25 10:31:05.641606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:02.058 [2024-07-25 10:31:05.641656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641674] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:02.058 [2024-07-25 10:31:05.641680] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:02.058 [2024-07-25 10:31:05.641685] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:02.058 [2024-07-25 10:31:05.641691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:02.058 [2024-07-25 10:31:05.641706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:02.058 [2024-07-25 10:31:05.641722] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:02.058 [2024-07-25 10:31:05.641733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641749] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:02.058 [2024-07-25 10:31:05.641755] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:02.058 [2024-07-25 10:31:05.641760] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:02.058 [2024-07-25 10:31:05.641766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:02.058 [2024-07-25 10:31:05.641790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:02.058 [2024-07-25 10:31:05.641804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641821] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:02.058 [2024-07-25 10:31:05.641826] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:02.058 [2024-07-25 10:31:05.641831] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:02.058 [2024-07-25 10:31:05.641838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:02.058 [2024-07-25 10:31:05.641851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:02.058 [2024-07-25 10:31:05.641860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:02.058 [2024-07-25 10:31:05.641868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:02.059 [2024-07-25 10:31:05.641877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:02.059 [2024-07-25 10:31:05.641886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:02.059 [2024-07-25 10:31:05.641892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:02.059 [2024-07-25 10:31:05.641898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:02.059 [2024-07-25 10:31:05.641904] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:02.059 [2024-07-25 10:31:05.641910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:02.059 [2024-07-25 10:31:05.641916] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:02.059 [2024-07-25 10:31:05.641934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:02.059 [2024-07-25 10:31:05.641945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:02.059 [2024-07-25 10:31:05.641958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:02.059 [2024-07-25 10:31:05.641966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:02.059 [2024-07-25 10:31:05.641979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:02.059 [2024-07-25 
10:31:05.641993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:02.059 [2024-07-25 10:31:05.642006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:02.059 [2024-07-25 10:31:05.642014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:02.059 [2024-07-25 10:31:05.642029] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:02.059 [2024-07-25 10:31:05.642035] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:02.059 [2024-07-25 10:31:05.642040] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:02.059 [2024-07-25 10:31:05.642044] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:02.059 [2024-07-25 10:31:05.642049] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:02.059 [2024-07-25 10:31:05.642056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:02.059 [2024-07-25 10:31:05.642064] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:02.059 [2024-07-25 10:31:05.642070] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:02.059 [2024-07-25 10:31:05.642075] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:02.059 [2024-07-25 10:31:05.642082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:02.059 [2024-07-25 10:31:05.642090] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:02.059 [2024-07-25 10:31:05.642096] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:02.059 [2024-07-25 10:31:05.642100] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:02.059 [2024-07-25 10:31:05.642107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:02.059 [2024-07-25 10:31:05.642115] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:02.059 [2024-07-25 10:31:05.642121] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:02.059 [2024-07-25 10:31:05.642125] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:02.059 [2024-07-25 10:31:05.642132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:02.059 [2024-07-25 10:31:05.642139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:02.059 [2024-07-25 10:31:05.642155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:02.059 [2024-07-25 
10:31:05.642168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:02.059 [2024-07-25 10:31:05.642177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:02.059 ===================================================== 00:16:02.059 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:02.059 ===================================================== 00:16:02.059 Controller Capabilities/Features 00:16:02.059 ================================ 00:16:02.059 Vendor ID: 4e58 00:16:02.059 Subsystem Vendor ID: 4e58 00:16:02.059 Serial Number: SPDK1 00:16:02.059 Model Number: SPDK bdev Controller 00:16:02.059 Firmware Version: 24.09 00:16:02.059 Recommended Arb Burst: 6 00:16:02.059 IEEE OUI Identifier: 8d 6b 50 00:16:02.059 Multi-path I/O 00:16:02.059 May have multiple subsystem ports: Yes 00:16:02.059 May have multiple controllers: Yes 00:16:02.059 Associated with SR-IOV VF: No 00:16:02.059 Max Data Transfer Size: 131072 00:16:02.059 Max Number of Namespaces: 32 00:16:02.059 Max Number of I/O Queues: 127 00:16:02.059 NVMe Specification Version (VS): 1.3 00:16:02.059 NVMe Specification Version (Identify): 1.3 00:16:02.059 Maximum Queue Entries: 256 00:16:02.059 Contiguous Queues Required: Yes 00:16:02.059 Arbitration Mechanisms Supported 00:16:02.059 Weighted Round Robin: Not Supported 00:16:02.059 Vendor Specific: Not Supported 00:16:02.059 Reset Timeout: 15000 ms 00:16:02.059 Doorbell Stride: 4 bytes 00:16:02.059 NVM Subsystem Reset: Not Supported 00:16:02.059 Command Sets Supported 00:16:02.059 NVM Command Set: Supported 00:16:02.059 Boot Partition: Not Supported 00:16:02.059 Memory Page Size Minimum: 4096 bytes 00:16:02.059 Memory Page Size Maximum: 4096 bytes 00:16:02.059 Persistent Memory Region: Not Supported 00:16:02.059 Optional Asynchronous Events Supported 00:16:02.059 Namespace Attribute Notices: Supported 00:16:02.059 Firmware Activation Notices: Not Supported 00:16:02.059 ANA Change Notices: Not Supported 00:16:02.059 PLE Aggregate Log Change Notices: Not Supported 00:16:02.059 LBA Status Info Alert Notices: Not Supported 00:16:02.059 EGE Aggregate Log Change Notices: Not Supported 00:16:02.059 Normal NVM Subsystem Shutdown event: Not Supported 00:16:02.059 Zone Descriptor Change Notices: Not Supported 00:16:02.059 Discovery Log Change Notices: Not Supported 00:16:02.059 Controller Attributes 00:16:02.059 128-bit Host Identifier: Supported 00:16:02.059 Non-Operational Permissive Mode: Not Supported 00:16:02.059 NVM Sets: Not Supported 00:16:02.059 Read Recovery Levels: Not Supported 00:16:02.059 Endurance Groups: Not Supported 00:16:02.059 Predictable Latency Mode: Not Supported 00:16:02.059 Traffic Based Keep ALive: Not Supported 00:16:02.059 Namespace Granularity: Not Supported 00:16:02.059 SQ Associations: Not Supported 00:16:02.059 UUID List: Not Supported 00:16:02.059 Multi-Domain Subsystem: Not Supported 00:16:02.059 Fixed Capacity Management: Not Supported 00:16:02.059 Variable Capacity Management: Not Supported 00:16:02.059 Delete Endurance Group: Not Supported 00:16:02.059 Delete NVM Set: Not Supported 00:16:02.059 Extended LBA Formats Supported: Not Supported 00:16:02.059 Flexible Data Placement Supported: Not Supported 00:16:02.059 00:16:02.059 Controller Memory Buffer Support 00:16:02.059 ================================ 00:16:02.059 Supported: No 00:16:02.059 00:16:02.059 Persistent 
Memory Region Support 00:16:02.059 ================================ 00:16:02.059 Supported: No 00:16:02.059 00:16:02.059 Admin Command Set Attributes 00:16:02.059 ============================ 00:16:02.059 Security Send/Receive: Not Supported 00:16:02.059 Format NVM: Not Supported 00:16:02.059 Firmware Activate/Download: Not Supported 00:16:02.059 Namespace Management: Not Supported 00:16:02.059 Device Self-Test: Not Supported 00:16:02.059 Directives: Not Supported 00:16:02.059 NVMe-MI: Not Supported 00:16:02.059 Virtualization Management: Not Supported 00:16:02.059 Doorbell Buffer Config: Not Supported 00:16:02.059 Get LBA Status Capability: Not Supported 00:16:02.059 Command & Feature Lockdown Capability: Not Supported 00:16:02.059 Abort Command Limit: 4 00:16:02.059 Async Event Request Limit: 4 00:16:02.059 Number of Firmware Slots: N/A 00:16:02.059 Firmware Slot 1 Read-Only: N/A 00:16:02.059 Firmware Activation Without Reset: N/A 00:16:02.059 Multiple Update Detection Support: N/A 00:16:02.059 Firmware Update Granularity: No Information Provided 00:16:02.059 Per-Namespace SMART Log: No 00:16:02.059 Asymmetric Namespace Access Log Page: Not Supported 00:16:02.059 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:02.059 Command Effects Log Page: Supported 00:16:02.059 Get Log Page Extended Data: Supported 00:16:02.059 Telemetry Log Pages: Not Supported 00:16:02.059 Persistent Event Log Pages: Not Supported 00:16:02.059 Supported Log Pages Log Page: May Support 00:16:02.060 Commands Supported & Effects Log Page: Not Supported 00:16:02.060 Feature Identifiers & Effects Log Page:May Support 00:16:02.060 NVMe-MI Commands & Effects Log Page: May Support 00:16:02.060 Data Area 4 for Telemetry Log: Not Supported 00:16:02.060 Error Log Page Entries Supported: 128 00:16:02.060 Keep Alive: Supported 00:16:02.060 Keep Alive Granularity: 10000 ms 00:16:02.060 00:16:02.060 NVM Command Set Attributes 00:16:02.060 ========================== 00:16:02.060 Submission Queue Entry Size 00:16:02.060 Max: 64 00:16:02.060 Min: 64 00:16:02.060 Completion Queue Entry Size 00:16:02.060 Max: 16 00:16:02.060 Min: 16 00:16:02.060 Number of Namespaces: 32 00:16:02.060 Compare Command: Supported 00:16:02.060 Write Uncorrectable Command: Not Supported 00:16:02.060 Dataset Management Command: Supported 00:16:02.060 Write Zeroes Command: Supported 00:16:02.060 Set Features Save Field: Not Supported 00:16:02.060 Reservations: Not Supported 00:16:02.060 Timestamp: Not Supported 00:16:02.060 Copy: Supported 00:16:02.060 Volatile Write Cache: Present 00:16:02.060 Atomic Write Unit (Normal): 1 00:16:02.060 Atomic Write Unit (PFail): 1 00:16:02.060 Atomic Compare & Write Unit: 1 00:16:02.060 Fused Compare & Write: Supported 00:16:02.060 Scatter-Gather List 00:16:02.060 SGL Command Set: Supported (Dword aligned) 00:16:02.060 SGL Keyed: Not Supported 00:16:02.060 SGL Bit Bucket Descriptor: Not Supported 00:16:02.060 SGL Metadata Pointer: Not Supported 00:16:02.060 Oversized SGL: Not Supported 00:16:02.060 SGL Metadata Address: Not Supported 00:16:02.060 SGL Offset: Not Supported 00:16:02.060 Transport SGL Data Block: Not Supported 00:16:02.060 Replay Protected Memory Block: Not Supported 00:16:02.060 00:16:02.060 Firmware Slot Information 00:16:02.060 ========================= 00:16:02.060 Active slot: 1 00:16:02.060 Slot 1 Firmware Revision: 24.09 00:16:02.060 00:16:02.060 00:16:02.060 Commands Supported and Effects 00:16:02.060 ============================== 00:16:02.060 Admin Commands 00:16:02.060 -------------- 00:16:02.060 Get 
Log Page (02h): Supported 00:16:02.060 Identify (06h): Supported 00:16:02.060 Abort (08h): Supported 00:16:02.060 Set Features (09h): Supported 00:16:02.060 Get Features (0Ah): Supported 00:16:02.060 Asynchronous Event Request (0Ch): Supported 00:16:02.060 Keep Alive (18h): Supported 00:16:02.060 I/O Commands 00:16:02.060 ------------ 00:16:02.060 Flush (00h): Supported LBA-Change 00:16:02.060 Write (01h): Supported LBA-Change 00:16:02.060 Read (02h): Supported 00:16:02.060 Compare (05h): Supported 00:16:02.060 Write Zeroes (08h): Supported LBA-Change 00:16:02.060 Dataset Management (09h): Supported LBA-Change 00:16:02.060 Copy (19h): Supported LBA-Change 00:16:02.060 00:16:02.060 Error Log 00:16:02.060 ========= 00:16:02.060 00:16:02.060 Arbitration 00:16:02.060 =========== 00:16:02.060 Arbitration Burst: 1 00:16:02.060 00:16:02.060 Power Management 00:16:02.060 ================ 00:16:02.060 Number of Power States: 1 00:16:02.060 Current Power State: Power State #0 00:16:02.060 Power State #0: 00:16:02.060 Max Power: 0.00 W 00:16:02.060 Non-Operational State: Operational 00:16:02.060 Entry Latency: Not Reported 00:16:02.060 Exit Latency: Not Reported 00:16:02.060 Relative Read Throughput: 0 00:16:02.060 Relative Read Latency: 0 00:16:02.060 Relative Write Throughput: 0 00:16:02.060 Relative Write Latency: 0 00:16:02.060 Idle Power: Not Reported 00:16:02.060 Active Power: Not Reported 00:16:02.060 Non-Operational Permissive Mode: Not Supported 00:16:02.060 00:16:02.060 Health Information 00:16:02.060 ================== 00:16:02.060 Critical Warnings: 00:16:02.060 Available Spare Space: OK 00:16:02.060 Temperature: OK 00:16:02.060 Device Reliability: OK 00:16:02.060 Read Only: No 00:16:02.060 Volatile Memory Backup: OK 00:16:02.060 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:02.060 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:02.060 Available Spare: 0% 00:16:02.060 Available Sp[2024-07-25 10:31:05.642265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:02.060 [2024-07-25 10:31:05.642277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:02.060 [2024-07-25 10:31:05.642305] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:02.060 [2024-07-25 10:31:05.642316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.060 [2024-07-25 10:31:05.642324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.060 [2024-07-25 10:31:05.642332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.060 [2024-07-25 10:31:05.642339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.060 [2024-07-25 10:31:05.644723] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:02.060 [2024-07-25 10:31:05.644736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:02.060 [2024-07-25 10:31:05.645281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:02.060 [2024-07-25 10:31:05.645332] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:02.060 [2024-07-25 10:31:05.645339] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:02.060 [2024-07-25 10:31:05.646289] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:02.060 [2024-07-25 10:31:05.646303] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:02.060 [2024-07-25 10:31:05.646353] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:02.060 [2024-07-25 10:31:05.649722] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:02.060 are Threshold: 0% 00:16:02.060 Life Percentage Used: 0% 00:16:02.060 Data Units Read: 0 00:16:02.060 Data Units Written: 0 00:16:02.060 Host Read Commands: 0 00:16:02.060 Host Write Commands: 0 00:16:02.060 Controller Busy Time: 0 minutes 00:16:02.060 Power Cycles: 0 00:16:02.060 Power On Hours: 0 hours 00:16:02.060 Unsafe Shutdowns: 0 00:16:02.060 Unrecoverable Media Errors: 0 00:16:02.060 Lifetime Error Log Entries: 0 00:16:02.060 Warning Temperature Time: 0 minutes 00:16:02.060 Critical Temperature Time: 0 minutes 00:16:02.060 00:16:02.060 Number of Queues 00:16:02.060 ================ 00:16:02.060 Number of I/O Submission Queues: 127 00:16:02.060 Number of I/O Completion Queues: 127 00:16:02.060 00:16:02.060 Active Namespaces 00:16:02.060 ================= 00:16:02.060 Namespace ID:1 00:16:02.060 Error Recovery Timeout: Unlimited 00:16:02.060 Command Set Identifier: NVM (00h) 00:16:02.060 Deallocate: Supported 00:16:02.060 Deallocated/Unwritten Error: Not Supported 00:16:02.060 Deallocated Read Value: Unknown 00:16:02.060 Deallocate in Write Zeroes: Not Supported 00:16:02.060 Deallocated Guard Field: 0xFFFF 00:16:02.060 Flush: Supported 00:16:02.060 Reservation: Supported 00:16:02.060 Namespace Sharing Capabilities: Multiple Controllers 00:16:02.060 Size (in LBAs): 131072 (0GiB) 00:16:02.060 Capacity (in LBAs): 131072 (0GiB) 00:16:02.060 Utilization (in LBAs): 131072 (0GiB) 00:16:02.060 NGUID: F0EA531497AB4D77B9AA3EA52737EB34 00:16:02.060 UUID: f0ea5314-97ab-4d77-b9aa-3ea52737eb34 00:16:02.060 Thin Provisioning: Not Supported 00:16:02.060 Per-NS Atomic Units: Yes 00:16:02.060 Atomic Boundary Size (Normal): 0 00:16:02.060 Atomic Boundary Size (PFail): 0 00:16:02.060 Atomic Boundary Offset: 0 00:16:02.060 Maximum Single Source Range Length: 65535 00:16:02.060 Maximum Copy Length: 65535 00:16:02.060 Maximum Source Range Count: 1 00:16:02.060 NGUID/EUI64 Never Reused: No 00:16:02.060 Namespace Write Protected: No 00:16:02.060 Number of LBA Formats: 1 00:16:02.060 Current LBA Format: LBA Format #00 00:16:02.060 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:02.060 00:16:02.060 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:02.060 EAL: No free 2048 kB hugepages reported 
on node 1 00:16:02.318 [2024-07-25 10:31:05.868500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:07.581 Initializing NVMe Controllers 00:16:07.581 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:07.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:07.581 Initialization complete. Launching workers. 00:16:07.581 ======================================================== 00:16:07.581 Latency(us) 00:16:07.581 Device Information : IOPS MiB/s Average min max 00:16:07.581 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39984.55 156.19 3203.69 905.03 10661.19 00:16:07.581 ======================================================== 00:16:07.581 Total : 39984.55 156.19 3203.69 905.03 10661.19 00:16:07.581 00:16:07.581 [2024-07-25 10:31:10.892995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:07.581 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:07.581 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.581 [2024-07-25 10:31:11.113996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:12.866 Initializing NVMe Controllers 00:16:12.866 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:12.866 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:12.866 Initialization complete. Launching workers. 
00:16:12.866 ======================================================== 00:16:12.866 Latency(us) 00:16:12.866 Device Information : IOPS MiB/s Average min max 00:16:12.866 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15986.87 62.45 8011.96 7781.92 11974.98 00:16:12.866 ======================================================== 00:16:12.866 Total : 15986.87 62.45 8011.96 7781.92 11974.98 00:16:12.866 00:16:12.866 [2024-07-25 10:31:16.152740] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:12.866 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:12.866 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.866 [2024-07-25 10:31:16.363703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:18.131 [2024-07-25 10:31:21.428995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:18.131 Initializing NVMe Controllers 00:16:18.131 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:18.131 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:18.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:18.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:18.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:18.131 Initialization complete. Launching workers. 00:16:18.131 Starting thread on core 2 00:16:18.131 Starting thread on core 3 00:16:18.131 Starting thread on core 1 00:16:18.131 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:18.131 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.131 [2024-07-25 10:31:21.733101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:21.416 [2024-07-25 10:31:24.803629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:21.416 Initializing NVMe Controllers 00:16:21.416 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.416 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.416 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:21.416 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:21.416 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:21.416 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:21.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:21.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:21.416 Initialization complete. Launching workers. 
00:16:21.416 Starting thread on core 1 with urgent priority queue 00:16:21.416 Starting thread on core 2 with urgent priority queue 00:16:21.416 Starting thread on core 3 with urgent priority queue 00:16:21.416 Starting thread on core 0 with urgent priority queue 00:16:21.416 SPDK bdev Controller (SPDK1 ) core 0: 8165.67 IO/s 12.25 secs/100000 ios 00:16:21.416 SPDK bdev Controller (SPDK1 ) core 1: 9601.67 IO/s 10.41 secs/100000 ios 00:16:21.416 SPDK bdev Controller (SPDK1 ) core 2: 10954.33 IO/s 9.13 secs/100000 ios 00:16:21.416 SPDK bdev Controller (SPDK1 ) core 3: 8204.67 IO/s 12.19 secs/100000 ios 00:16:21.416 ======================================================== 00:16:21.416 00:16:21.416 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:21.416 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.416 [2024-07-25 10:31:25.103160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:21.674 Initializing NVMe Controllers 00:16:21.674 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.674 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.674 Namespace ID: 1 size: 0GB 00:16:21.674 Initialization complete. 00:16:21.674 INFO: using host memory buffer for IO 00:16:21.674 Hello world! 00:16:21.674 [2024-07-25 10:31:25.139521] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:21.674 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:21.674 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.932 [2024-07-25 10:31:25.422106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:22.866 Initializing NVMe Controllers 00:16:22.866 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:22.866 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:22.866 Initialization complete. Launching workers. 
00:16:22.866 submit (in ns) avg, min, max = 6412.6, 3056.0, 4209852.0 00:16:22.866 complete (in ns) avg, min, max = 22608.2, 1711.2, 3998584.0 00:16:22.866 00:16:22.866 Submit histogram 00:16:22.866 ================ 00:16:22.866 Range in us Cumulative Count 00:16:22.866 3.046 - 3.059: 0.0060% ( 1) 00:16:22.866 3.059 - 3.072: 0.0240% ( 3) 00:16:22.866 3.072 - 3.085: 0.1618% ( 23) 00:16:22.866 3.085 - 3.098: 0.4015% ( 40) 00:16:22.866 3.098 - 3.110: 1.2105% ( 135) 00:16:22.866 3.110 - 3.123: 2.6608% ( 242) 00:16:22.866 3.123 - 3.136: 4.8601% ( 367) 00:16:22.866 3.136 - 3.149: 7.6766% ( 470) 00:16:22.866 3.149 - 3.162: 11.0086% ( 556) 00:16:22.866 3.162 - 3.174: 15.8567% ( 809) 00:16:22.866 3.174 - 3.187: 20.9804% ( 855) 00:16:22.866 3.187 - 3.200: 26.6255% ( 942) 00:16:22.866 3.200 - 3.213: 33.0677% ( 1075) 00:16:22.866 3.213 - 3.226: 40.2769% ( 1203) 00:16:22.866 3.226 - 3.238: 46.4194% ( 1025) 00:16:22.866 3.238 - 3.251: 51.2015% ( 798) 00:16:22.866 3.251 - 3.264: 55.6541% ( 743) 00:16:22.866 3.264 - 3.277: 60.1187% ( 745) 00:16:22.866 3.277 - 3.302: 67.3818% ( 1212) 00:16:22.866 3.302 - 3.328: 73.8299% ( 1076) 00:16:22.866 3.328 - 3.354: 81.3328% ( 1252) 00:16:22.866 3.354 - 3.379: 85.6235% ( 716) 00:16:22.866 3.379 - 3.405: 87.4513% ( 305) 00:16:22.866 3.405 - 3.430: 88.3023% ( 142) 00:16:22.866 3.430 - 3.456: 89.0633% ( 127) 00:16:22.866 3.456 - 3.482: 90.2379% ( 196) 00:16:22.866 3.482 - 3.507: 91.7840% ( 258) 00:16:22.866 3.507 - 3.533: 93.5159% ( 289) 00:16:22.866 3.533 - 3.558: 95.0021% ( 248) 00:16:22.866 3.558 - 3.584: 96.3984% ( 233) 00:16:22.866 3.584 - 3.610: 97.7647% ( 228) 00:16:22.866 3.610 - 3.635: 98.5737% ( 135) 00:16:22.866 3.635 - 3.661: 99.0891% ( 86) 00:16:22.866 3.661 - 3.686: 99.4007% ( 52) 00:16:22.866 3.686 - 3.712: 99.5865% ( 31) 00:16:22.866 3.712 - 3.738: 99.6464% ( 10) 00:16:22.866 3.738 - 3.763: 99.6764% ( 5) 00:16:22.866 3.763 - 3.789: 99.6884% ( 2) 00:16:22.866 3.891 - 3.917: 99.6944% ( 1) 00:16:22.866 6.144 - 6.170: 99.7004% ( 1) 00:16:22.866 6.349 - 6.374: 99.7064% ( 1) 00:16:22.866 6.426 - 6.451: 99.7124% ( 1) 00:16:22.866 6.477 - 6.502: 99.7183% ( 1) 00:16:22.866 6.502 - 6.528: 99.7243% ( 1) 00:16:22.866 6.528 - 6.554: 99.7363% ( 2) 00:16:22.866 6.554 - 6.605: 99.7423% ( 1) 00:16:22.866 6.605 - 6.656: 99.7543% ( 2) 00:16:22.866 6.656 - 6.707: 99.7603% ( 1) 00:16:22.866 6.707 - 6.758: 99.7723% ( 2) 00:16:22.866 6.758 - 6.810: 99.7783% ( 1) 00:16:22.866 6.810 - 6.861: 99.7903% ( 2) 00:16:22.866 6.861 - 6.912: 99.7962% ( 1) 00:16:22.866 6.912 - 6.963: 99.8022% ( 1) 00:16:22.866 6.963 - 7.014: 99.8082% ( 1) 00:16:22.866 7.014 - 7.066: 99.8202% ( 2) 00:16:22.866 7.066 - 7.117: 99.8322% ( 2) 00:16:22.866 7.168 - 7.219: 99.8382% ( 1) 00:16:22.866 7.219 - 7.270: 99.8442% ( 1) 00:16:22.866 7.270 - 7.322: 99.8562% ( 2) 00:16:22.866 7.322 - 7.373: 99.8622% ( 1) 00:16:22.866 7.526 - 7.578: 99.8801% ( 3) 00:16:22.866 7.782 - 7.834: 99.8861% ( 1) 00:16:22.866 7.885 - 7.936: 99.8921% ( 1) 00:16:22.866 7.987 - 8.038: 99.8981% ( 1) 00:16:22.866 8.448 - 8.499: 99.9041% ( 1) 00:16:22.866 9.626 - 9.677: 99.9101% ( 1) 00:16:22.866 11.622 - 11.674: 99.9161% ( 1) 00:16:22.866 15.462 - 15.565: 99.9221% ( 1) 00:16:22.866 3984.589 - 4010.803: 99.9940% ( 12) 00:16:22.866 4194.304 - 4220.518: 100.0000% ( 1) 00:16:22.866 00:16:22.866 Complete histogram 00:16:22.866 ================== 00:16:22.866 Range in us Cumulative Count 00:16:22.866 1.702 - 1.715: 0.0839% ( 14) 00:16:22.866 1.715 - 1.728: 7.1493% ( 1179) 00:16:22.866 1.728 - 1.741: 38.9165% ( 5301) 00:16:22.866 1.741 - 1.754: 
48.1572% ( 1542) 00:16:22.866 1.754 - 1.766: 50.4285% ( 379) 00:16:22.866 1.766 - 1.779: 62.4918% ( 2013) 00:16:22.866 1.779 - 1.792: 87.6071% ( 4191) 00:16:22.866 1.792 - [2024-07-25 10:31:26.443165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:22.866 1.805: 95.1939% ( 1266) 00:16:22.866 1.805 - 1.818: 97.5250% ( 389) 00:16:22.866 1.818 - 1.830: 98.3580% ( 139) 00:16:22.866 1.830 - 1.843: 98.5737% ( 36) 00:16:22.866 1.843 - 1.856: 98.8614% ( 48) 00:16:22.866 1.856 - 1.869: 99.0711% ( 35) 00:16:22.866 1.869 - 1.882: 99.1910% ( 20) 00:16:22.866 1.882 - 1.894: 99.2329% ( 7) 00:16:22.866 1.894 - 1.907: 99.2629% ( 5) 00:16:22.866 1.907 - 1.920: 99.2749% ( 2) 00:16:22.866 1.946 - 1.958: 99.2869% ( 2) 00:16:22.866 1.984 - 1.997: 99.2929% ( 1) 00:16:22.866 2.240 - 2.253: 99.2989% ( 1) 00:16:22.866 4.403 - 4.429: 99.3048% ( 1) 00:16:22.866 4.634 - 4.659: 99.3108% ( 1) 00:16:22.866 4.762 - 4.787: 99.3168% ( 1) 00:16:22.866 4.864 - 4.890: 99.3228% ( 1) 00:16:22.866 4.915 - 4.941: 99.3288% ( 1) 00:16:22.866 4.941 - 4.966: 99.3348% ( 1) 00:16:22.866 5.120 - 5.146: 99.3408% ( 1) 00:16:22.866 5.248 - 5.274: 99.3468% ( 1) 00:16:22.866 5.299 - 5.325: 99.3588% ( 2) 00:16:22.866 5.427 - 5.453: 99.3708% ( 2) 00:16:22.866 5.453 - 5.478: 99.3768% ( 1) 00:16:22.866 5.478 - 5.504: 99.3828% ( 1) 00:16:22.866 5.530 - 5.555: 99.3887% ( 1) 00:16:22.866 5.555 - 5.581: 99.3947% ( 1) 00:16:22.866 5.581 - 5.606: 99.4067% ( 2) 00:16:22.866 5.606 - 5.632: 99.4127% ( 1) 00:16:22.866 5.709 - 5.734: 99.4187% ( 1) 00:16:22.866 5.837 - 5.862: 99.4247% ( 1) 00:16:22.866 5.862 - 5.888: 99.4307% ( 1) 00:16:22.866 5.914 - 5.939: 99.4367% ( 1) 00:16:22.866 6.093 - 6.118: 99.4427% ( 1) 00:16:22.866 6.221 - 6.246: 99.4487% ( 1) 00:16:22.866 6.298 - 6.323: 99.4547% ( 1) 00:16:22.866 6.451 - 6.477: 99.4607% ( 1) 00:16:22.866 10.547 - 10.598: 99.4667% ( 1) 00:16:22.866 14.234 - 14.336: 99.4726% ( 1) 00:16:22.866 188.416 - 189.235: 99.4786% ( 1) 00:16:22.866 3984.589 - 4010.803: 100.0000% ( 87) 00:16:22.866 00:16:22.867 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:22.867 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:22.867 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:22.867 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:22.867 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:23.125 [ 00:16:23.125 { 00:16:23.125 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:23.125 "subtype": "Discovery", 00:16:23.125 "listen_addresses": [], 00:16:23.125 "allow_any_host": true, 00:16:23.125 "hosts": [] 00:16:23.125 }, 00:16:23.125 { 00:16:23.125 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:23.125 "subtype": "NVMe", 00:16:23.125 "listen_addresses": [ 00:16:23.125 { 00:16:23.125 "trtype": "VFIOUSER", 00:16:23.125 "adrfam": "IPv4", 00:16:23.125 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:23.125 "trsvcid": "0" 00:16:23.125 } 00:16:23.125 ], 00:16:23.125 "allow_any_host": true, 00:16:23.125 "hosts": [], 00:16:23.125 "serial_number": "SPDK1", 00:16:23.125 "model_number": 
"SPDK bdev Controller", 00:16:23.125 "max_namespaces": 32, 00:16:23.125 "min_cntlid": 1, 00:16:23.125 "max_cntlid": 65519, 00:16:23.125 "namespaces": [ 00:16:23.125 { 00:16:23.125 "nsid": 1, 00:16:23.125 "bdev_name": "Malloc1", 00:16:23.125 "name": "Malloc1", 00:16:23.125 "nguid": "F0EA531497AB4D77B9AA3EA52737EB34", 00:16:23.125 "uuid": "f0ea5314-97ab-4d77-b9aa-3ea52737eb34" 00:16:23.125 } 00:16:23.125 ] 00:16:23.125 }, 00:16:23.125 { 00:16:23.125 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:23.125 "subtype": "NVMe", 00:16:23.125 "listen_addresses": [ 00:16:23.125 { 00:16:23.125 "trtype": "VFIOUSER", 00:16:23.125 "adrfam": "IPv4", 00:16:23.125 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:23.125 "trsvcid": "0" 00:16:23.125 } 00:16:23.125 ], 00:16:23.125 "allow_any_host": true, 00:16:23.125 "hosts": [], 00:16:23.125 "serial_number": "SPDK2", 00:16:23.125 "model_number": "SPDK bdev Controller", 00:16:23.125 "max_namespaces": 32, 00:16:23.125 "min_cntlid": 1, 00:16:23.125 "max_cntlid": 65519, 00:16:23.125 "namespaces": [ 00:16:23.125 { 00:16:23.125 "nsid": 1, 00:16:23.125 "bdev_name": "Malloc2", 00:16:23.125 "name": "Malloc2", 00:16:23.125 "nguid": "829CE371D851472390B7A4F7800DEE8B", 00:16:23.125 "uuid": "829ce371-d851-4723-90b7-a4f7800dee8b" 00:16:23.125 } 00:16:23.125 ] 00:16:23.125 } 00:16:23.125 ] 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3871494 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:23.125 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:23.125 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.383 [2024-07-25 10:31:26.838114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:23.383 Malloc3 00:16:23.383 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:23.383 [2024-07-25 10:31:27.040534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:23.383 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:23.383 Asynchronous Event Request test 00:16:23.383 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:23.383 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:23.383 Registering asynchronous event callbacks... 00:16:23.383 Starting namespace attribute notice tests for all controllers... 00:16:23.383 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:23.383 aer_cb - Changed Namespace 00:16:23.383 Cleaning up... 00:16:23.642 [ 00:16:23.642 { 00:16:23.642 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:23.642 "subtype": "Discovery", 00:16:23.642 "listen_addresses": [], 00:16:23.642 "allow_any_host": true, 00:16:23.642 "hosts": [] 00:16:23.642 }, 00:16:23.642 { 00:16:23.642 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:23.642 "subtype": "NVMe", 00:16:23.642 "listen_addresses": [ 00:16:23.642 { 00:16:23.642 "trtype": "VFIOUSER", 00:16:23.642 "adrfam": "IPv4", 00:16:23.642 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:23.642 "trsvcid": "0" 00:16:23.642 } 00:16:23.642 ], 00:16:23.642 "allow_any_host": true, 00:16:23.642 "hosts": [], 00:16:23.642 "serial_number": "SPDK1", 00:16:23.642 "model_number": "SPDK bdev Controller", 00:16:23.642 "max_namespaces": 32, 00:16:23.642 "min_cntlid": 1, 00:16:23.642 "max_cntlid": 65519, 00:16:23.642 "namespaces": [ 00:16:23.642 { 00:16:23.642 "nsid": 1, 00:16:23.642 "bdev_name": "Malloc1", 00:16:23.642 "name": "Malloc1", 00:16:23.642 "nguid": "F0EA531497AB4D77B9AA3EA52737EB34", 00:16:23.642 "uuid": "f0ea5314-97ab-4d77-b9aa-3ea52737eb34" 00:16:23.642 }, 00:16:23.642 { 00:16:23.642 "nsid": 2, 00:16:23.642 "bdev_name": "Malloc3", 00:16:23.642 "name": "Malloc3", 00:16:23.642 "nguid": "48FCA288ABC0431CBF3B9DA6FAD7F079", 00:16:23.642 "uuid": "48fca288-abc0-431c-bf3b-9da6fad7f079" 00:16:23.642 } 00:16:23.642 ] 00:16:23.642 }, 00:16:23.642 { 00:16:23.642 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:23.642 "subtype": "NVMe", 00:16:23.642 "listen_addresses": [ 00:16:23.642 { 00:16:23.642 "trtype": "VFIOUSER", 00:16:23.642 "adrfam": "IPv4", 00:16:23.642 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:23.642 "trsvcid": "0" 00:16:23.642 } 00:16:23.642 ], 00:16:23.642 "allow_any_host": true, 00:16:23.642 "hosts": [], 00:16:23.642 
"serial_number": "SPDK2", 00:16:23.642 "model_number": "SPDK bdev Controller", 00:16:23.642 "max_namespaces": 32, 00:16:23.642 "min_cntlid": 1, 00:16:23.642 "max_cntlid": 65519, 00:16:23.642 "namespaces": [ 00:16:23.642 { 00:16:23.642 "nsid": 1, 00:16:23.642 "bdev_name": "Malloc2", 00:16:23.642 "name": "Malloc2", 00:16:23.642 "nguid": "829CE371D851472390B7A4F7800DEE8B", 00:16:23.642 "uuid": "829ce371-d851-4723-90b7-a4f7800dee8b" 00:16:23.642 } 00:16:23.642 ] 00:16:23.642 } 00:16:23.642 ] 00:16:23.642 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3871494 00:16:23.642 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:23.642 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:23.642 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:23.642 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:23.642 [2024-07-25 10:31:27.256442] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:16:23.642 [2024-07-25 10:31:27.256473] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871515 ] 00:16:23.642 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.642 [2024-07-25 10:31:27.284900] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:23.642 [2024-07-25 10:31:27.294961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:23.642 [2024-07-25 10:31:27.294982] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0e2d8e5000 00:16:23.643 [2024-07-25 10:31:27.295958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.296964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.297975] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.298976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.299983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.300985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.301991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.302999] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:23.643 [2024-07-25 10:31:27.304014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:23.643 [2024-07-25 10:31:27.304026] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0e2d8da000 00:16:23.643 [2024-07-25 10:31:27.304917] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:23.643 [2024-07-25 10:31:27.314862] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:23.643 [2024-07-25 10:31:27.314886] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:23.643 [2024-07-25 10:31:27.321989] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:23.643 [2024-07-25 10:31:27.322027] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:23.643 [2024-07-25 10:31:27.322093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:23.643 [2024-07-25 10:31:27.322110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:23.643 [2024-07-25 10:31:27.322116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:23.643 [2024-07-25 10:31:27.322985] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:23.643 [2024-07-25 10:31:27.322999] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:23.643 [2024-07-25 10:31:27.323008] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:23.643 [2024-07-25 10:31:27.323999] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:23.643 [2024-07-25 10:31:27.324009] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:23.643 [2024-07-25 10:31:27.324018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:23.643 [2024-07-25 10:31:27.325010] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:23.643 [2024-07-25 10:31:27.325021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:23.643 [2024-07-25 10:31:27.326013] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:23.643 [2024-07-25 10:31:27.326023] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:23.643 [2024-07-25 10:31:27.326032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:23.643 [2024-07-25 10:31:27.326041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:23.643 [2024-07-25 10:31:27.326148] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:23.643 [2024-07-25 10:31:27.326154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:23.643 [2024-07-25 10:31:27.326160] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:23.643 [2024-07-25 10:31:27.327028] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:23.643 [2024-07-25 10:31:27.328033] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:23.643 [2024-07-25 10:31:27.329042] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:23.643 [2024-07-25 10:31:27.330044] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.643 [2024-07-25 10:31:27.330084] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:23.643 [2024-07-25 10:31:27.331051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:23.643 [2024-07-25 10:31:27.331062] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:23.643 [2024-07-25 10:31:27.331068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:23.643 [2024-07-25 10:31:27.331087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:23.643 [2024-07-25 10:31:27.331100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:23.643 [2024-07-25 10:31:27.331113] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:23.643 [2024-07-25 10:31:27.331120] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:23.643 [2024-07-25 10:31:27.331124] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:23.643 [2024-07-25 10:31:27.331137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:23.643 [2024-07-25 10:31:27.338724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:23.643 [2024-07-25 10:31:27.338737] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:23.643 [2024-07-25 10:31:27.338743] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:23.643 [2024-07-25 10:31:27.338749] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:23.643 [2024-07-25 10:31:27.338755] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:23.643 [2024-07-25 10:31:27.338761] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:23.643 [2024-07-25 10:31:27.338770] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:23.643 [2024-07-25 10:31:27.338776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:23.643 [2024-07-25 10:31:27.338785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:23.643 [2024-07-25 10:31:27.338798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:23.903 [2024-07-25 10:31:27.346722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:23.903 [2024-07-25 10:31:27.346737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.903 [2024-07-25 10:31:27.346746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.903 [2024-07-25 10:31:27.346756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.903 [2024-07-25 10:31:27.346765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.903 [2024-07-25 10:31:27.346771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.346781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.346791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:23.903 [2024-07-25 10:31:27.354721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:23.903 [2024-07-25 10:31:27.354730] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:23.903 [2024-07-25 10:31:27.354737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.354747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.354754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.354764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:23.903 [2024-07-25 10:31:27.362722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:23.903 [2024-07-25 10:31:27.362775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.362785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.362794] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:23.903 [2024-07-25 10:31:27.362800] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:23.903 [2024-07-25 10:31:27.362804] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:23.903 [2024-07-25 10:31:27.362812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:23.903 [2024-07-25 10:31:27.370720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:23.903 [2024-07-25 10:31:27.370735] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:23.903 [2024-07-25 10:31:27.370747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.370756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.370764] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:23.903 [2024-07-25 10:31:27.370770] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:23.903 [2024-07-25 10:31:27.370775] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:23.903 [2024-07-25 10:31:27.370782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:23.903 [2024-07-25 10:31:27.378722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:23.903 [2024-07-25 10:31:27.378738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:23.903 [2024-07-25 10:31:27.378747] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.378755] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:23.904 [2024-07-25 10:31:27.378761] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:23.904 [2024-07-25 10:31:27.378765] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:23.904 [2024-07-25 10:31:27.378772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.386721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.386732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.386740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.386749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.386758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.386765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.386771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.386777] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:23.904 [2024-07-25 10:31:27.386783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:23.904 [2024-07-25 10:31:27.386789] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:23.904 [2024-07-25 10:31:27.386807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.394721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.394737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.402721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.402736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.410722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.410737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.418720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.418738] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:23.904 [2024-07-25 10:31:27.418745] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:23.904 [2024-07-25 10:31:27.418749] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:23.904 [2024-07-25 10:31:27.418754] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:23.904 [2024-07-25 10:31:27.418758] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:23.904 [2024-07-25 10:31:27.418765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:23.904 [2024-07-25 10:31:27.418774] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:23.904 [2024-07-25 10:31:27.418780] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:23.904 [2024-07-25 10:31:27.418784] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:23.904 [2024-07-25 10:31:27.418791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.418799] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:23.904 [2024-07-25 10:31:27.418805] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:23.904 [2024-07-25 10:31:27.418809] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:23.904 [2024-07-25 10:31:27.418816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.418824] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:23.904 [2024-07-25 10:31:27.418830] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:23.904 [2024-07-25 10:31:27.418834] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:23.904 [2024-07-25 10:31:27.418841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:23.904 [2024-07-25 10:31:27.426722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.426738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.426751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:23.904 [2024-07-25 10:31:27.426760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:23.904 ===================================================== 00:16:23.904 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.904 ===================================================== 00:16:23.904 Controller Capabilities/Features 00:16:23.904 ================================ 00:16:23.904 Vendor ID: 4e58 00:16:23.904 Subsystem Vendor ID: 4e58 00:16:23.904 Serial Number: SPDK2 00:16:23.904 Model Number: SPDK bdev Controller 00:16:23.904 Firmware Version: 24.09 00:16:23.904 Recommended Arb Burst: 6 00:16:23.904 IEEE OUI Identifier: 8d 6b 50 00:16:23.904 Multi-path I/O 00:16:23.904 May have multiple subsystem ports: Yes 00:16:23.904 May have multiple controllers: Yes 00:16:23.904 Associated with SR-IOV VF: No 00:16:23.904 Max Data Transfer Size: 131072 00:16:23.904 Max Number of Namespaces: 32 00:16:23.904 Max Number of I/O Queues: 127 00:16:23.904 NVMe Specification Version (VS): 1.3 00:16:23.904 NVMe Specification Version (Identify): 1.3 00:16:23.904 Maximum Queue Entries: 256 00:16:23.904 Contiguous Queues Required: Yes 00:16:23.904 Arbitration Mechanisms Supported 00:16:23.904 Weighted Round Robin: Not Supported 00:16:23.904 Vendor Specific: Not Supported 00:16:23.904 Reset Timeout: 15000 ms 00:16:23.904 Doorbell Stride: 4 bytes 00:16:23.904 NVM Subsystem Reset: Not Supported 00:16:23.904 Command Sets Supported 00:16:23.904 NVM Command Set: Supported 00:16:23.904 Boot Partition: Not Supported 00:16:23.904 Memory Page Size Minimum: 4096 bytes 00:16:23.904 Memory Page Size Maximum: 4096 bytes 00:16:23.904 Persistent Memory Region: Not Supported 00:16:23.904 Optional Asynchronous Events Supported 00:16:23.904 Namespace Attribute Notices: Supported 00:16:23.904 Firmware Activation Notices: Not Supported 00:16:23.904 ANA Change Notices: Not Supported 00:16:23.904 PLE Aggregate Log Change Notices: Not Supported 00:16:23.904 LBA Status Info Alert Notices: Not Supported 00:16:23.904 EGE Aggregate Log Change Notices: Not Supported 00:16:23.904 Normal NVM Subsystem Shutdown event: Not Supported 00:16:23.904 Zone Descriptor Change Notices: Not Supported 00:16:23.904 Discovery Log Change Notices: Not Supported 00:16:23.904 Controller Attributes 00:16:23.904 128-bit Host Identifier: Supported 00:16:23.904 Non-Operational Permissive Mode: Not Supported 00:16:23.904 NVM Sets: Not Supported 00:16:23.904 Read Recovery Levels: Not Supported 00:16:23.904 Endurance Groups: Not Supported 00:16:23.904 Predictable Latency Mode: Not Supported 00:16:23.904 Traffic Based Keep ALive: Not Supported 00:16:23.904 Namespace Granularity: Not Supported 00:16:23.904 SQ Associations: Not Supported 00:16:23.904 UUID List: Not Supported 00:16:23.904 Multi-Domain Subsystem: Not Supported 00:16:23.904 Fixed Capacity Management: Not Supported 00:16:23.904 Variable Capacity Management: Not Supported 00:16:23.904 Delete Endurance Group: Not Supported 00:16:23.904 Delete NVM Set: Not Supported 00:16:23.904 Extended LBA Formats Supported: Not Supported 00:16:23.904 Flexible Data Placement Supported: Not Supported 00:16:23.904 00:16:23.904 Controller Memory Buffer Support 00:16:23.904 ================================ 00:16:23.904 Supported: No 00:16:23.904 00:16:23.904 Persistent Memory Region Support 00:16:23.904 
================================ 00:16:23.904 Supported: No 00:16:23.904 00:16:23.904 Admin Command Set Attributes 00:16:23.904 ============================ 00:16:23.904 Security Send/Receive: Not Supported 00:16:23.904 Format NVM: Not Supported 00:16:23.904 Firmware Activate/Download: Not Supported 00:16:23.904 Namespace Management: Not Supported 00:16:23.904 Device Self-Test: Not Supported 00:16:23.904 Directives: Not Supported 00:16:23.904 NVMe-MI: Not Supported 00:16:23.904 Virtualization Management: Not Supported 00:16:23.904 Doorbell Buffer Config: Not Supported 00:16:23.904 Get LBA Status Capability: Not Supported 00:16:23.904 Command & Feature Lockdown Capability: Not Supported 00:16:23.904 Abort Command Limit: 4 00:16:23.904 Async Event Request Limit: 4 00:16:23.904 Number of Firmware Slots: N/A 00:16:23.905 Firmware Slot 1 Read-Only: N/A 00:16:23.905 Firmware Activation Without Reset: N/A 00:16:23.905 Multiple Update Detection Support: N/A 00:16:23.905 Firmware Update Granularity: No Information Provided 00:16:23.905 Per-Namespace SMART Log: No 00:16:23.905 Asymmetric Namespace Access Log Page: Not Supported 00:16:23.905 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:23.905 Command Effects Log Page: Supported 00:16:23.905 Get Log Page Extended Data: Supported 00:16:23.905 Telemetry Log Pages: Not Supported 00:16:23.905 Persistent Event Log Pages: Not Supported 00:16:23.905 Supported Log Pages Log Page: May Support 00:16:23.905 Commands Supported & Effects Log Page: Not Supported 00:16:23.905 Feature Identifiers & Effects Log Page:May Support 00:16:23.905 NVMe-MI Commands & Effects Log Page: May Support 00:16:23.905 Data Area 4 for Telemetry Log: Not Supported 00:16:23.905 Error Log Page Entries Supported: 128 00:16:23.905 Keep Alive: Supported 00:16:23.905 Keep Alive Granularity: 10000 ms 00:16:23.905 00:16:23.905 NVM Command Set Attributes 00:16:23.905 ========================== 00:16:23.905 Submission Queue Entry Size 00:16:23.905 Max: 64 00:16:23.905 Min: 64 00:16:23.905 Completion Queue Entry Size 00:16:23.905 Max: 16 00:16:23.905 Min: 16 00:16:23.905 Number of Namespaces: 32 00:16:23.905 Compare Command: Supported 00:16:23.905 Write Uncorrectable Command: Not Supported 00:16:23.905 Dataset Management Command: Supported 00:16:23.905 Write Zeroes Command: Supported 00:16:23.905 Set Features Save Field: Not Supported 00:16:23.905 Reservations: Not Supported 00:16:23.905 Timestamp: Not Supported 00:16:23.905 Copy: Supported 00:16:23.905 Volatile Write Cache: Present 00:16:23.905 Atomic Write Unit (Normal): 1 00:16:23.905 Atomic Write Unit (PFail): 1 00:16:23.905 Atomic Compare & Write Unit: 1 00:16:23.905 Fused Compare & Write: Supported 00:16:23.905 Scatter-Gather List 00:16:23.905 SGL Command Set: Supported (Dword aligned) 00:16:23.905 SGL Keyed: Not Supported 00:16:23.905 SGL Bit Bucket Descriptor: Not Supported 00:16:23.905 SGL Metadata Pointer: Not Supported 00:16:23.905 Oversized SGL: Not Supported 00:16:23.905 SGL Metadata Address: Not Supported 00:16:23.905 SGL Offset: Not Supported 00:16:23.905 Transport SGL Data Block: Not Supported 00:16:23.905 Replay Protected Memory Block: Not Supported 00:16:23.905 00:16:23.905 Firmware Slot Information 00:16:23.905 ========================= 00:16:23.905 Active slot: 1 00:16:23.905 Slot 1 Firmware Revision: 24.09 00:16:23.905 00:16:23.905 00:16:23.905 Commands Supported and Effects 00:16:23.905 ============================== 00:16:23.905 Admin Commands 00:16:23.905 -------------- 00:16:23.905 Get Log Page (02h): Supported 
00:16:23.905 Identify (06h): Supported 00:16:23.905 Abort (08h): Supported 00:16:23.905 Set Features (09h): Supported 00:16:23.905 Get Features (0Ah): Supported 00:16:23.905 Asynchronous Event Request (0Ch): Supported 00:16:23.905 Keep Alive (18h): Supported 00:16:23.905 I/O Commands 00:16:23.905 ------------ 00:16:23.905 Flush (00h): Supported LBA-Change 00:16:23.905 Write (01h): Supported LBA-Change 00:16:23.905 Read (02h): Supported 00:16:23.905 Compare (05h): Supported 00:16:23.905 Write Zeroes (08h): Supported LBA-Change 00:16:23.905 Dataset Management (09h): Supported LBA-Change 00:16:23.905 Copy (19h): Supported LBA-Change 00:16:23.905 00:16:23.905 Error Log 00:16:23.905 ========= 00:16:23.905 00:16:23.905 Arbitration 00:16:23.905 =========== 00:16:23.905 Arbitration Burst: 1 00:16:23.905 00:16:23.905 Power Management 00:16:23.905 ================ 00:16:23.905 Number of Power States: 1 00:16:23.905 Current Power State: Power State #0 00:16:23.905 Power State #0: 00:16:23.905 Max Power: 0.00 W 00:16:23.905 Non-Operational State: Operational 00:16:23.905 Entry Latency: Not Reported 00:16:23.905 Exit Latency: Not Reported 00:16:23.905 Relative Read Throughput: 0 00:16:23.905 Relative Read Latency: 0 00:16:23.905 Relative Write Throughput: 0 00:16:23.905 Relative Write Latency: 0 00:16:23.905 Idle Power: Not Reported 00:16:23.905 Active Power: Not Reported 00:16:23.905 Non-Operational Permissive Mode: Not Supported 00:16:23.905 00:16:23.905 Health Information 00:16:23.905 ================== 00:16:23.905 Critical Warnings: 00:16:23.905 Available Spare Space: OK 00:16:23.905 Temperature: OK 00:16:23.905 Device Reliability: OK 00:16:23.905 Read Only: No 00:16:23.905 Volatile Memory Backup: OK 00:16:23.905 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:23.905 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:23.905 Available Spare: 0% 00:16:23.905 Available Sp[2024-07-25 10:31:27.426850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:23.905 [2024-07-25 10:31:27.434720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:23.905 [2024-07-25 10:31:27.434750] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:23.905 [2024-07-25 10:31:27.434760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.905 [2024-07-25 10:31:27.434768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.905 [2024-07-25 10:31:27.434776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.905 [2024-07-25 10:31:27.434783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.905 [2024-07-25 10:31:27.434835] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:23.905 [2024-07-25 10:31:27.434847] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:23.905 [2024-07-25 10:31:27.435842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:16:23.905 [2024-07-25 10:31:27.435887] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:23.905 [2024-07-25 10:31:27.435894] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:23.905 [2024-07-25 10:31:27.436845] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:23.905 [2024-07-25 10:31:27.436858] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:23.905 [2024-07-25 10:31:27.436904] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:23.905 [2024-07-25 10:31:27.437972] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:23.905 are Threshold: 0% 00:16:23.905 Life Percentage Used: 0% 00:16:23.905 Data Units Read: 0 00:16:23.905 Data Units Written: 0 00:16:23.905 Host Read Commands: 0 00:16:23.905 Host Write Commands: 0 00:16:23.905 Controller Busy Time: 0 minutes 00:16:23.905 Power Cycles: 0 00:16:23.905 Power On Hours: 0 hours 00:16:23.905 Unsafe Shutdowns: 0 00:16:23.905 Unrecoverable Media Errors: 0 00:16:23.905 Lifetime Error Log Entries: 0 00:16:23.905 Warning Temperature Time: 0 minutes 00:16:23.905 Critical Temperature Time: 0 minutes 00:16:23.905 00:16:23.905 Number of Queues 00:16:23.905 ================ 00:16:23.905 Number of I/O Submission Queues: 127 00:16:23.905 Number of I/O Completion Queues: 127 00:16:23.905 00:16:23.905 Active Namespaces 00:16:23.905 ================= 00:16:23.905 Namespace ID:1 00:16:23.905 Error Recovery Timeout: Unlimited 00:16:23.905 Command Set Identifier: NVM (00h) 00:16:23.905 Deallocate: Supported 00:16:23.905 Deallocated/Unwritten Error: Not Supported 00:16:23.905 Deallocated Read Value: Unknown 00:16:23.905 Deallocate in Write Zeroes: Not Supported 00:16:23.905 Deallocated Guard Field: 0xFFFF 00:16:23.905 Flush: Supported 00:16:23.905 Reservation: Supported 00:16:23.905 Namespace Sharing Capabilities: Multiple Controllers 00:16:23.905 Size (in LBAs): 131072 (0GiB) 00:16:23.905 Capacity (in LBAs): 131072 (0GiB) 00:16:23.905 Utilization (in LBAs): 131072 (0GiB) 00:16:23.905 NGUID: 829CE371D851472390B7A4F7800DEE8B 00:16:23.905 UUID: 829ce371-d851-4723-90b7-a4f7800dee8b 00:16:23.905 Thin Provisioning: Not Supported 00:16:23.905 Per-NS Atomic Units: Yes 00:16:23.905 Atomic Boundary Size (Normal): 0 00:16:23.905 Atomic Boundary Size (PFail): 0 00:16:23.905 Atomic Boundary Offset: 0 00:16:23.905 Maximum Single Source Range Length: 65535 00:16:23.905 Maximum Copy Length: 65535 00:16:23.905 Maximum Source Range Count: 1 00:16:23.905 NGUID/EUI64 Never Reused: No 00:16:23.905 Namespace Write Protected: No 00:16:23.905 Number of LBA Formats: 1 00:16:23.905 Current LBA Format: LBA Format #00 00:16:23.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:23.905 00:16:23.906 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:23.906 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.164 [2024-07-25 
10:31:27.645662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.424 Initializing NVMe Controllers 00:16:29.424 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:29.424 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:29.425 Initialization complete. Launching workers. 00:16:29.425 ======================================================== 00:16:29.425 Latency(us) 00:16:29.425 Device Information : IOPS MiB/s Average min max 00:16:29.425 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39958.65 156.09 3203.15 923.04 6690.53 00:16:29.425 ======================================================== 00:16:29.425 Total : 39958.65 156.09 3203.15 923.04 6690.53 00:16:29.425 00:16:29.425 [2024-07-25 10:31:32.750969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.425 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:29.425 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.425 [2024-07-25 10:31:32.971651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:34.689 Initializing NVMe Controllers 00:16:34.689 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:34.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:34.689 Initialization complete. Launching workers. 
00:16:34.689 ======================================================== 00:16:34.689 Latency(us) 00:16:34.689 Device Information : IOPS MiB/s Average min max 00:16:34.689 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39930.42 155.98 3205.41 930.25 7041.96 00:16:34.689 ======================================================== 00:16:34.689 Total : 39930.42 155.98 3205.41 930.25 7041.96 00:16:34.689 00:16:34.689 [2024-07-25 10:31:37.992281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:34.690 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:34.690 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.690 [2024-07-25 10:31:38.214906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:39.950 [2024-07-25 10:31:43.360823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:39.950 Initializing NVMe Controllers 00:16:39.950 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:39.950 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:39.950 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:39.950 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:39.950 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:39.950 Initialization complete. Launching workers. 00:16:39.950 Starting thread on core 2 00:16:39.950 Starting thread on core 3 00:16:39.950 Starting thread on core 1 00:16:39.950 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:39.950 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.210 [2024-07-25 10:31:43.665147] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:43.496 [2024-07-25 10:31:46.725725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:43.496 Initializing NVMe Controllers 00:16:43.496 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.496 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:43.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:43.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:43.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:43.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:43.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:43.496 Initialization complete. Launching workers. 
00:16:43.496 Starting thread on core 1 with urgent priority queue 00:16:43.496 Starting thread on core 2 with urgent priority queue 00:16:43.496 Starting thread on core 3 with urgent priority queue 00:16:43.496 Starting thread on core 0 with urgent priority queue 00:16:43.496 SPDK bdev Controller (SPDK2 ) core 0: 8085.67 IO/s 12.37 secs/100000 ios 00:16:43.496 SPDK bdev Controller (SPDK2 ) core 1: 8054.00 IO/s 12.42 secs/100000 ios 00:16:43.496 SPDK bdev Controller (SPDK2 ) core 2: 10135.00 IO/s 9.87 secs/100000 ios 00:16:43.496 SPDK bdev Controller (SPDK2 ) core 3: 8475.67 IO/s 11.80 secs/100000 ios 00:16:43.496 ======================================================== 00:16:43.496 00:16:43.496 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:43.496 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.496 [2024-07-25 10:31:47.012016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:43.496 Initializing NVMe Controllers 00:16:43.496 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.496 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.496 Namespace ID: 1 size: 0GB 00:16:43.496 Initialization complete. 00:16:43.496 INFO: using host memory buffer for IO 00:16:43.496 Hello world! 00:16:43.496 [2024-07-25 10:31:47.025101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:43.497 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:43.497 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.754 [2024-07-25 10:31:47.310486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:45.129 Initializing NVMe Controllers 00:16:45.129 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:45.129 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:45.129 Initialization complete. Launching workers. 
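The @88 and @89 steps launched above close out this controller with the hello_world example and the overhead tool; the avg/min/max line and the submit/complete histograms that follow are the overhead tool's output. A sketch of the two invocations, flags copied from the log and paths shortened to a relative ./spdk checkout:

    HELLO=./spdk/build/examples/hello_world
    OVERHEAD=./spdk/test/nvme/overhead/overhead
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # Single hello-world I/O through the vfio-user controller.
    $HELLO -d 256 -g -r "$TRID"

    # 4 KiB I/O for 1 second; -H is carried over from the logged command and appears
    # to be what enables the latency histograms printed below.
    $OVERHEAD -o 4096 -t 1 -H -g -d 256 -r "$TRID"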
00:16:45.129 submit (in ns) avg, min, max = 5923.0, 3028.0, 4994850.4 00:16:45.129 complete (in ns) avg, min, max = 20980.7, 1678.4, 4994136.8 00:16:45.129 00:16:45.129 Submit histogram 00:16:45.129 ================ 00:16:45.129 Range in us Cumulative Count 00:16:45.129 3.021 - 3.034: 0.0059% ( 1) 00:16:45.129 3.034 - 3.046: 0.0176% ( 2) 00:16:45.129 3.046 - 3.059: 0.0235% ( 1) 00:16:45.129 3.059 - 3.072: 0.0705% ( 8) 00:16:45.129 3.072 - 3.085: 0.1174% ( 8) 00:16:45.129 3.085 - 3.098: 0.2525% ( 23) 00:16:45.129 3.098 - 3.110: 0.9102% ( 112) 00:16:45.129 3.110 - 3.123: 2.2901% ( 235) 00:16:45.129 3.123 - 3.136: 4.9560% ( 454) 00:16:45.129 3.136 - 3.149: 8.2149% ( 555) 00:16:45.129 3.149 - 3.162: 12.3077% ( 697) 00:16:45.129 3.162 - 3.174: 17.9859% ( 967) 00:16:45.129 3.174 - 3.187: 23.9812% ( 1021) 00:16:45.129 3.187 - 3.200: 29.7651% ( 985) 00:16:45.129 3.200 - 3.213: 36.2008% ( 1096) 00:16:45.129 3.213 - 3.226: 43.1180% ( 1178) 00:16:45.129 3.226 - 3.238: 49.5185% ( 1090) 00:16:45.129 3.238 - 3.251: 54.1809% ( 794) 00:16:45.130 3.251 - 3.264: 57.4516% ( 557) 00:16:45.130 3.264 - 3.277: 61.1216% ( 625) 00:16:45.130 3.277 - 3.302: 66.2948% ( 881) 00:16:45.130 3.302 - 3.328: 71.1685% ( 830) 00:16:45.130 3.328 - 3.354: 77.9272% ( 1151) 00:16:45.130 3.354 - 3.379: 83.6641% ( 977) 00:16:45.130 3.379 - 3.405: 86.7939% ( 533) 00:16:45.130 3.405 - 3.430: 88.2971% ( 256) 00:16:45.130 3.430 - 3.456: 89.2073% ( 155) 00:16:45.130 3.456 - 3.482: 90.2055% ( 170) 00:16:45.130 3.482 - 3.507: 91.7381% ( 261) 00:16:45.130 3.507 - 3.533: 93.4351% ( 289) 00:16:45.130 3.533 - 3.558: 94.7504% ( 224) 00:16:45.130 3.558 - 3.584: 96.0834% ( 227) 00:16:45.130 3.584 - 3.610: 97.1932% ( 189) 00:16:45.130 3.610 - 3.635: 98.0916% ( 153) 00:16:45.130 3.635 - 3.661: 98.7610% ( 114) 00:16:45.130 3.661 - 3.686: 99.1192% ( 61) 00:16:45.130 3.686 - 3.712: 99.5068% ( 66) 00:16:45.130 3.712 - 3.738: 99.6359% ( 22) 00:16:45.130 3.738 - 3.763: 99.6888% ( 9) 00:16:45.130 3.763 - 3.789: 99.7005% ( 2) 00:16:45.130 3.789 - 3.814: 99.7240% ( 4) 00:16:45.130 3.866 - 3.891: 99.7299% ( 1) 00:16:45.130 5.376 - 5.402: 99.7358% ( 1) 00:16:45.130 5.402 - 5.427: 99.7416% ( 1) 00:16:45.130 5.453 - 5.478: 99.7475% ( 1) 00:16:45.130 5.478 - 5.504: 99.7592% ( 2) 00:16:45.130 5.530 - 5.555: 99.7651% ( 1) 00:16:45.130 5.632 - 5.658: 99.7710% ( 1) 00:16:45.130 5.658 - 5.683: 99.7769% ( 1) 00:16:45.130 5.683 - 5.709: 99.7945% ( 3) 00:16:45.130 5.709 - 5.734: 99.8180% ( 4) 00:16:45.130 5.734 - 5.760: 99.8238% ( 1) 00:16:45.130 5.760 - 5.786: 99.8356% ( 2) 00:16:45.130 5.786 - 5.811: 99.8415% ( 1) 00:16:45.130 5.990 - 6.016: 99.8473% ( 1) 00:16:45.130 6.426 - 6.451: 99.8532% ( 1) 00:16:45.130 6.477 - 6.502: 99.8591% ( 1) 00:16:45.130 6.554 - 6.605: 99.8767% ( 3) 00:16:45.130 6.605 - 6.656: 99.8826% ( 1) 00:16:45.130 6.861 - 6.912: 99.8884% ( 1) 00:16:45.130 7.014 - 7.066: 99.9002% ( 2) 00:16:45.130 7.168 - 7.219: 99.9119% ( 2) 00:16:45.130 7.270 - 7.322: 99.9237% ( 2) 00:16:45.130 8.243 - 8.294: 99.9295% ( 1) 00:16:45.130 8.448 - 8.499: 99.9354% ( 1) 00:16:45.130 3984.589 - 4010.803: 99.9941% ( 10) 00:16:45.130 4980.736 - 5006.950: 100.0000% ( 1) 00:16:45.130 00:16:45.130 Complete histogram 00:16:45.130 ================== 00:16:45.130 Range in us Cumulative Count 00:16:45.130 1.677 - 1.690: 0.0528% ( 9) 00:16:45.130 1.690 - 1.702: 0.0705% ( 3) 00:16:45.130 1.702 - 1.715: 0.0881% ( 3) 00:16:45.130 1.715 - 1.728: 2.5308% ( 416) 00:16:45.130 1.728 - 1.741: 17.7158% ( 2586) 00:16:45.130 1.741 - 1.754: 25.0617% ( 1251) 00:16:45.130 1.754 - 1.766: 28.4087% 
( 570) 00:16:45.130 1.766 - 1.779: 59.8180% ( 5349) 00:16:45.130 1.779 - 1.792: 91.1509% ( 5336) 00:16:45.130 1.792 - 1.805: 96.5003% ( 911) 00:16:45.130 1.805 - [2024-07-25 10:31:48.404578] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:45.130 1.818: 97.7628% ( 215) 00:16:45.130 1.818 - 1.830: 98.0094% ( 42) 00:16:45.130 1.830 - 1.843: 98.3265% ( 54) 00:16:45.130 1.843 - 1.856: 98.7845% ( 78) 00:16:45.130 1.856 - 1.869: 99.0781% ( 50) 00:16:45.130 1.869 - 1.882: 99.1955% ( 20) 00:16:45.130 1.882 - 1.894: 99.2425% ( 8) 00:16:45.130 1.894 - 1.907: 99.2719% ( 5) 00:16:45.130 1.907 - 1.920: 99.2836% ( 2) 00:16:45.130 1.920 - 1.933: 99.2895% ( 1) 00:16:45.130 1.933 - 1.946: 99.3012% ( 2) 00:16:45.130 1.946 - 1.958: 99.3071% ( 1) 00:16:45.130 1.958 - 1.971: 99.3130% ( 1) 00:16:45.130 1.971 - 1.984: 99.3188% ( 1) 00:16:45.130 2.010 - 2.022: 99.3247% ( 1) 00:16:45.130 2.022 - 2.035: 99.3306% ( 1) 00:16:45.130 2.074 - 2.086: 99.3365% ( 1) 00:16:45.130 2.086 - 2.099: 99.3423% ( 1) 00:16:45.130 2.138 - 2.150: 99.3482% ( 1) 00:16:45.130 2.163 - 2.176: 99.3600% ( 2) 00:16:45.130 3.994 - 4.019: 99.3717% ( 2) 00:16:45.130 4.045 - 4.070: 99.3776% ( 1) 00:16:45.130 4.096 - 4.122: 99.3834% ( 1) 00:16:45.130 4.250 - 4.275: 99.3952% ( 2) 00:16:45.130 4.301 - 4.326: 99.4011% ( 1) 00:16:45.130 4.429 - 4.454: 99.4128% ( 2) 00:16:45.130 4.608 - 4.634: 99.4245% ( 2) 00:16:45.130 4.787 - 4.813: 99.4304% ( 1) 00:16:45.130 4.890 - 4.915: 99.4363% ( 1) 00:16:45.130 4.992 - 5.018: 99.4422% ( 1) 00:16:45.130 5.120 - 5.146: 99.4480% ( 1) 00:16:45.130 5.146 - 5.171: 99.4539% ( 1) 00:16:45.130 5.222 - 5.248: 99.4656% ( 2) 00:16:45.130 5.325 - 5.350: 99.4715% ( 1) 00:16:45.130 5.350 - 5.376: 99.4774% ( 1) 00:16:45.130 5.658 - 5.683: 99.4833% ( 1) 00:16:45.130 5.811 - 5.837: 99.4891% ( 1) 00:16:45.130 6.016 - 6.042: 99.4950% ( 1) 00:16:45.130 6.502 - 6.528: 99.5009% ( 1) 00:16:45.130 6.912 - 6.963: 99.5068% ( 1) 00:16:45.130 10.086 - 10.138: 99.5126% ( 1) 00:16:45.130 12.698 - 12.749: 99.5185% ( 1) 00:16:45.130 3014.656 - 3027.763: 99.5244% ( 1) 00:16:45.130 3486.515 - 3512.730: 99.5302% ( 1) 00:16:45.130 3984.589 - 4010.803: 99.9941% ( 79) 00:16:45.130 4980.736 - 5006.950: 100.0000% ( 1) 00:16:45.130 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:45.130 [ 00:16:45.130 { 00:16:45.130 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:45.130 "subtype": "Discovery", 00:16:45.130 "listen_addresses": [], 00:16:45.130 "allow_any_host": true, 00:16:45.130 "hosts": [] 00:16:45.130 }, 00:16:45.130 { 00:16:45.130 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:45.130 "subtype": "NVMe", 00:16:45.130 "listen_addresses": [ 00:16:45.130 { 00:16:45.130 "trtype": "VFIOUSER", 00:16:45.130 "adrfam": "IPv4", 00:16:45.130 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:16:45.130 "trsvcid": "0" 00:16:45.130 } 00:16:45.130 ], 00:16:45.130 "allow_any_host": true, 00:16:45.130 "hosts": [], 00:16:45.130 "serial_number": "SPDK1", 00:16:45.130 "model_number": "SPDK bdev Controller", 00:16:45.130 "max_namespaces": 32, 00:16:45.130 "min_cntlid": 1, 00:16:45.130 "max_cntlid": 65519, 00:16:45.130 "namespaces": [ 00:16:45.130 { 00:16:45.130 "nsid": 1, 00:16:45.130 "bdev_name": "Malloc1", 00:16:45.130 "name": "Malloc1", 00:16:45.130 "nguid": "F0EA531497AB4D77B9AA3EA52737EB34", 00:16:45.130 "uuid": "f0ea5314-97ab-4d77-b9aa-3ea52737eb34" 00:16:45.130 }, 00:16:45.130 { 00:16:45.130 "nsid": 2, 00:16:45.130 "bdev_name": "Malloc3", 00:16:45.130 "name": "Malloc3", 00:16:45.130 "nguid": "48FCA288ABC0431CBF3B9DA6FAD7F079", 00:16:45.130 "uuid": "48fca288-abc0-431c-bf3b-9da6fad7f079" 00:16:45.130 } 00:16:45.130 ] 00:16:45.130 }, 00:16:45.130 { 00:16:45.130 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:45.130 "subtype": "NVMe", 00:16:45.130 "listen_addresses": [ 00:16:45.130 { 00:16:45.130 "trtype": "VFIOUSER", 00:16:45.130 "adrfam": "IPv4", 00:16:45.130 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:45.130 "trsvcid": "0" 00:16:45.130 } 00:16:45.130 ], 00:16:45.130 "allow_any_host": true, 00:16:45.130 "hosts": [], 00:16:45.130 "serial_number": "SPDK2", 00:16:45.130 "model_number": "SPDK bdev Controller", 00:16:45.130 "max_namespaces": 32, 00:16:45.130 "min_cntlid": 1, 00:16:45.130 "max_cntlid": 65519, 00:16:45.130 "namespaces": [ 00:16:45.130 { 00:16:45.130 "nsid": 1, 00:16:45.130 "bdev_name": "Malloc2", 00:16:45.130 "name": "Malloc2", 00:16:45.130 "nguid": "829CE371D851472390B7A4F7800DEE8B", 00:16:45.130 "uuid": "829ce371-d851-4723-90b7-a4f7800dee8b" 00:16:45.130 } 00:16:45.130 ] 00:16:45.130 } 00:16:45.130 ] 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3875150 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:45.130 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:45.131 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:45.131 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:45.131 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:45.131 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:45.131 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:45.131 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.131 [2024-07-25 10:31:48.807116] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:45.131 Malloc4 00:16:45.389 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:45.389 [2024-07-25 10:31:49.001560] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:45.389 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:45.389 Asynchronous Event Request test 00:16:45.389 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:45.389 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:45.389 Registering asynchronous event callbacks... 00:16:45.389 Starting namespace attribute notice tests for all controllers... 00:16:45.389 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:45.389 aer_cb - Changed Namespace 00:16:45.389 Cleaning up... 00:16:45.647 [ 00:16:45.647 { 00:16:45.647 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:45.647 "subtype": "Discovery", 00:16:45.647 "listen_addresses": [], 00:16:45.647 "allow_any_host": true, 00:16:45.647 "hosts": [] 00:16:45.647 }, 00:16:45.647 { 00:16:45.647 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:45.647 "subtype": "NVMe", 00:16:45.647 "listen_addresses": [ 00:16:45.647 { 00:16:45.647 "trtype": "VFIOUSER", 00:16:45.647 "adrfam": "IPv4", 00:16:45.647 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:45.647 "trsvcid": "0" 00:16:45.647 } 00:16:45.647 ], 00:16:45.647 "allow_any_host": true, 00:16:45.648 "hosts": [], 00:16:45.648 "serial_number": "SPDK1", 00:16:45.648 "model_number": "SPDK bdev Controller", 00:16:45.648 "max_namespaces": 32, 00:16:45.648 "min_cntlid": 1, 00:16:45.648 "max_cntlid": 65519, 00:16:45.648 "namespaces": [ 00:16:45.648 { 00:16:45.648 "nsid": 1, 00:16:45.648 "bdev_name": "Malloc1", 00:16:45.648 "name": "Malloc1", 00:16:45.648 "nguid": "F0EA531497AB4D77B9AA3EA52737EB34", 00:16:45.648 "uuid": "f0ea5314-97ab-4d77-b9aa-3ea52737eb34" 00:16:45.648 }, 00:16:45.648 { 00:16:45.648 "nsid": 2, 00:16:45.648 "bdev_name": "Malloc3", 00:16:45.648 "name": "Malloc3", 00:16:45.648 "nguid": "48FCA288ABC0431CBF3B9DA6FAD7F079", 00:16:45.648 "uuid": "48fca288-abc0-431c-bf3b-9da6fad7f079" 00:16:45.648 } 00:16:45.648 ] 00:16:45.648 }, 00:16:45.648 { 00:16:45.648 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:45.648 "subtype": "NVMe", 00:16:45.648 "listen_addresses": [ 00:16:45.648 { 00:16:45.648 "trtype": "VFIOUSER", 00:16:45.648 "adrfam": "IPv4", 00:16:45.648 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:45.648 "trsvcid": "0" 00:16:45.648 } 00:16:45.648 ], 00:16:45.648 "allow_any_host": true, 00:16:45.648 "hosts": [], 00:16:45.648 
"serial_number": "SPDK2", 00:16:45.648 "model_number": "SPDK bdev Controller", 00:16:45.648 "max_namespaces": 32, 00:16:45.648 "min_cntlid": 1, 00:16:45.648 "max_cntlid": 65519, 00:16:45.648 "namespaces": [ 00:16:45.648 { 00:16:45.648 "nsid": 1, 00:16:45.648 "bdev_name": "Malloc2", 00:16:45.648 "name": "Malloc2", 00:16:45.648 "nguid": "829CE371D851472390B7A4F7800DEE8B", 00:16:45.648 "uuid": "829ce371-d851-4723-90b7-a4f7800dee8b" 00:16:45.648 }, 00:16:45.648 { 00:16:45.648 "nsid": 2, 00:16:45.648 "bdev_name": "Malloc4", 00:16:45.648 "name": "Malloc4", 00:16:45.648 "nguid": "3576F6112FBB4B22857495EB607697DA", 00:16:45.648 "uuid": "3576f611-2fbb-4b22-8574-95eb607697da" 00:16:45.648 } 00:16:45.648 ] 00:16:45.648 } 00:16:45.648 ] 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3875150 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3867256 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3867256 ']' 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3867256 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3867256 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3867256' 00:16:45.648 killing process with pid 3867256 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3867256 00:16:45.648 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3867256 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3875222 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3875222' 00:16:45.907 Process pid: 3875222 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:45.907 10:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3875222 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3875222 ']' 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.907 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:45.907 [2024-07-25 10:31:49.563106] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:45.907 [2024-07-25 10:31:49.563985] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:16:45.907 [2024-07-25 10:31:49.564027] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.907 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.166 [2024-07-25 10:31:49.633784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.166 [2024-07-25 10:31:49.697362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.166 [2024-07-25 10:31:49.697405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.166 [2024-07-25 10:31:49.697414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.166 [2024-07-25 10:31:49.697422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.166 [2024-07-25 10:31:49.697429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.166 [2024-07-25 10:31:49.697527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.166 [2024-07-25 10:31:49.697641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.166 [2024-07-25 10:31:49.697707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.166 [2024-07-25 10:31:49.697709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.166 [2024-07-25 10:31:49.776492] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:46.166 [2024-07-25 10:31:49.776647] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:46.166 [2024-07-25 10:31:49.776876] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
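At this point the first target has been killed and nvmf_tgt has been restarted for the interrupt-mode pass on cores 0-3; the notices above and on the next line show its poll-group threads being switched to interrupt mode. The script then rebuilds the same two vfio-user devices over RPC, passing the extra transport flags -M -I seen in the logged nvmf_create_transport call. A condensed sketch of that bring-up, with commands mirroring the trace (paths shortened to ./spdk; waitforlisten is the helper from the repo's test/common/autotest_common.sh that the trace itself steps through):

    # Restart the target in interrupt mode (command as logged above).
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!
    waitforlisten $nvmfpid

    RPC=./spdk/scripts/rpc.py
    # -M -I are carried over verbatim from the logged nvmf_create_transport call.
    $RPC nvmf_create_transport -t VFIOUSER -M -I

    # Two malloc-backed vfio-user devices, as in the seq 1 2 loop in the trace.
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $RPC bdev_malloc_create 64 512 -b Malloc$i
        $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done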
00:16:46.166 [2024-07-25 10:31:49.777223] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:46.166 [2024-07-25 10:31:49.777471] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:46.731 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.731 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:46.731 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:48.105 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:48.105 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:48.105 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:48.105 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:48.105 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:48.105 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:48.105 Malloc1 00:16:48.105 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:48.364 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:48.622 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:48.622 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:48.622 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:48.622 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:48.881 Malloc2 00:16:48.881 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:49.139 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:49.139 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3875222 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3875222 ']' 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3875222 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3875222 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3875222' 00:16:49.398 killing process with pid 3875222 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3875222 00:16:49.398 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3875222 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:49.657 00:16:49.657 real 0m51.430s 00:16:49.657 user 3m22.456s 00:16:49.657 sys 0m4.699s 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:49.657 ************************************ 00:16:49.657 END TEST nvmf_vfio_user 00:16:49.657 ************************************ 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.657 10:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.915 ************************************ 00:16:49.915 START TEST nvmf_vfio_user_nvme_compliance 00:16:49.915 ************************************ 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:49.915 * Looking for test storage... 
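The vfio-user section proper ends above (about 51 seconds wall-clock), and the wrapper chains into the NVMe compliance stage through the run_test helper, which also prints the START TEST / END TEST banners and the timing summaries seen throughout this log. The call pattern, copied from the line above with the workspace path shortened to a relative ./spdk checkout:

    # run_test <name> <command...> is the SPDK autotest helper used for every stage here.
    run_test nvmf_vfio_user_nvme_compliance \
        ./spdk/test/nvme/compliance/compliance.sh --transport=tcp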
00:16:49.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3876068 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3876068' 00:16:49.915 Process pid: 3876068 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3876068 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3876068 ']' 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:49.915 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:49.915 [2024-07-25 10:31:53.562248] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
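The compliance harness has just started its own nvmf_tgt (core mask 0x7, tracing enabled) and is waiting through DPDK/EAL initialization, which continues on the following lines. It then configures a single malloc-backed vfio-user subsystem through rpc_cmd (effectively a wrapper around scripts/rpc.py) and runs the nvme_compliance binary against it. Condensed into a sketch, with every command and argument taken from the trace that follows (rpc.py called directly here, paths shortened to ./spdk):

    RPC=./spdk/scripts/rpc.py
    mkdir -p /var/run/vfio-user

    $RPC nvmf_create_transport -t VFIOUSER
    $RPC bdev_malloc_create 64 512 -b malloc0
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    # Run the CUnit compliance suite against the endpoint just created (flags as logged).
    ./spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'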
00:16:49.915 [2024-07-25 10:31:53.562297] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.915 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.173 [2024-07-25 10:31:53.633360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:50.173 [2024-07-25 10:31:53.706747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.173 [2024-07-25 10:31:53.706786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.173 [2024-07-25 10:31:53.706796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.173 [2024-07-25 10:31:53.706804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.173 [2024-07-25 10:31:53.706811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.173 [2024-07-25 10:31:53.706864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.173 [2024-07-25 10:31:53.706882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.173 [2024-07-25 10:31:53.706884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.752 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.752 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:50.752 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:51.685 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:51.685 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:51.685 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:51.686 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.686 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:51.686 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.686 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:51.686 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:51.686 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.686 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:51.943 malloc0 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.943 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:51.943 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.943 00:16:51.943 00:16:51.943 CUnit - A unit testing framework for C - Version 2.1-3 00:16:51.943 http://cunit.sourceforge.net/ 00:16:51.943 00:16:51.943 00:16:51.943 Suite: nvme_compliance 00:16:51.943 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 10:31:55.613668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:51.943 [2024-07-25 10:31:55.615013] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:51.943 [2024-07-25 10:31:55.615028] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:51.943 [2024-07-25 10:31:55.615036] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:51.943 [2024-07-25 10:31:55.619698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:51.943 passed 00:16:52.201 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 10:31:55.695275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.201 [2024-07-25 10:31:55.698295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.201 passed 00:16:52.201 Test: admin_identify_ns ...[2024-07-25 10:31:55.778057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.201 [2024-07-25 10:31:55.837730] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:52.201 [2024-07-25 10:31:55.845725] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:52.201 [2024-07-25 
10:31:55.866853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.201 passed 00:16:52.458 Test: admin_get_features_mandatory_features ...[2024-07-25 10:31:55.942656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.458 [2024-07-25 10:31:55.945675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.458 passed 00:16:52.458 Test: admin_get_features_optional_features ...[2024-07-25 10:31:56.021178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.458 [2024-07-25 10:31:56.024195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.458 passed 00:16:52.458 Test: admin_set_features_number_of_queues ...[2024-07-25 10:31:56.099824] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.716 [2024-07-25 10:31:56.206798] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.716 passed 00:16:52.716 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 10:31:56.281295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.716 [2024-07-25 10:31:56.284319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.716 passed 00:16:52.716 Test: admin_get_log_page_with_lpo ...[2024-07-25 10:31:56.357863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.973 [2024-07-25 10:31:56.428726] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:52.973 [2024-07-25 10:31:56.441801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.973 passed 00:16:52.973 Test: fabric_property_get ...[2024-07-25 10:31:56.514248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.973 [2024-07-25 10:31:56.515493] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:52.973 [2024-07-25 10:31:56.517276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.973 passed 00:16:52.973 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 10:31:56.594787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.973 [2024-07-25 10:31:56.596024] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:52.973 [2024-07-25 10:31:56.597803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.973 passed 00:16:52.973 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 10:31:56.672872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:53.230 [2024-07-25 10:31:56.757721] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:53.230 [2024-07-25 10:31:56.773721] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:53.230 [2024-07-25 10:31:56.778810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:53.230 passed 00:16:53.230 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 10:31:56.853105] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:53.230 [2024-07-25 10:31:56.854340] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:16:53.230 [2024-07-25 10:31:56.856124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:53.230 passed 00:16:53.230 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 10:31:56.933671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:53.488 [2024-07-25 10:31:57.006729] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:53.488 [2024-07-25 10:31:57.030721] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:53.488 [2024-07-25 10:31:57.035811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:53.488 passed 00:16:53.488 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 10:31:57.110104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:53.488 [2024-07-25 10:31:57.111343] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:53.488 [2024-07-25 10:31:57.111371] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:53.488 [2024-07-25 10:31:57.113127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:53.488 passed 00:16:53.488 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 10:31:57.188652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:53.745 [2024-07-25 10:31:57.281726] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:53.745 [2024-07-25 10:31:57.289725] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:53.745 [2024-07-25 10:31:57.297728] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:53.745 [2024-07-25 10:31:57.305721] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:53.745 [2024-07-25 10:31:57.333804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:53.745 passed 00:16:53.745 Test: admin_create_io_sq_verify_pc ...[2024-07-25 10:31:57.406267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:53.746 [2024-07-25 10:31:57.421732] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:53.746 [2024-07-25 10:31:57.439439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.003 passed 00:16:54.003 Test: admin_create_io_qp_max_qps ...[2024-07-25 10:31:57.515969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.933 [2024-07-25 10:31:58.623725] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:55.498 [2024-07-25 10:31:59.006984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.498 passed 00:16:55.498 Test: admin_create_io_sq_shared_cq ...[2024-07-25 10:31:59.080817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.756 [2024-07-25 10:31:59.212725] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:55.756 [2024-07-25 10:31:59.249781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.756 passed 00:16:55.756 00:16:55.756 Run Summary: Type Total Ran Passed Failed Inactive 00:16:55.756 
suites 1 1 n/a 0 0 00:16:55.756 tests 18 18 18 0 0 00:16:55.756 asserts 360 360 360 0 n/a 00:16:55.756 00:16:55.756 Elapsed time = 1.494 seconds 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3876068 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3876068 ']' 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3876068 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3876068 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3876068' 00:16:55.756 killing process with pid 3876068 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3876068 00:16:55.756 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3876068 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:56.014 00:16:56.014 real 0m6.168s 00:16:56.014 user 0m17.410s 00:16:56.014 sys 0m0.698s 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:56.014 ************************************ 00:16:56.014 END TEST nvmf_vfio_user_nvme_compliance 00:16:56.014 ************************************ 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.014 ************************************ 00:16:56.014 START TEST nvmf_vfio_user_fuzz 00:16:56.014 ************************************ 00:16:56.014 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:56.273 * Looking for test storage... 
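The compliance stage finishes with all 18 tests passing in about 1.5 seconds of suite time (roughly 6 seconds wall-clock including target setup and teardown), and the wrapper moves on to the vfio-user fuzz stage via the same run_test mechanism. Under the same assumptions (an SPDK checkout with the test binaries built, likely needing root), that stage can be reproduced on its own roughly as follows; the script path and --transport argument are taken from the run_test call above, while the checkout path is a placeholder:

    # Standalone reproduction sketch of the fuzz stage that starts here.
    cd /path/to/spdk     # placeholder for an SPDK checkout with tests built
    sudo ./test/nvmf/target/vfio_user_fuzz.sh --transport=tcp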
00:16:56.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3877189 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3877189' 00:16:56.273 Process pid: 3877189 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3877189 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3877189 ']' 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
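Before the fuzzer can run, the harness starts the target and blocks in waitforlisten until the RPC socket answers. A rough equivalent of that bring-up, assuming paths relative to the SPDK tree and using rpc_get_methods as the readiness probe in place of the real helper's internals (the 100-iteration budget mirrors max_retries=100 in the trace):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &        # single-core target, all trace flags enabled
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # any successful RPC means the app is up and listening on /var/tmp/spdk.sock
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done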
00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.273 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:57.205 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.205 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:57.205 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:58.137 malloc0 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
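The controller the fuzzer attaches to is assembled purely through the RPCs shown in the trace above; gathered into one runnable sequence (paths relative to the SPDK tree, rpc.py talking to the default /var/tmp/spdk.sock):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                               # vfio-user transport
  mkdir -p /var/run/vfio-user                                          # directory used as the listen address
  $rpc bdev_malloc_create 64 512 -b malloc0                            # 64 MB RAM disk, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk     # -a: allow any host, -s: serial number
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0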
00:16:58.137 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:30.206 Fuzzing completed. Shutting down the fuzz application 00:17:30.206 00:17:30.206 Dumping successful admin opcodes: 00:17:30.206 8, 9, 10, 24, 00:17:30.206 Dumping successful io opcodes: 00:17:30.206 0, 00:17:30.206 NS: 0x200003a1ef00 I/O qp, Total commands completed: 925514, total successful commands: 3606, random_seed: 2166042816 00:17:30.206 NS: 0x200003a1ef00 admin qp, Total commands completed: 202610, total successful commands: 1623, random_seed: 222968064 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3877189 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3877189 ']' 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3877189 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3877189 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3877189' 00:17:30.206 killing process with pid 3877189 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3877189 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3877189 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:30.206 00:17:30.206 real 0m32.815s 00:17:30.206 user 0m28.964s 00:17:30.206 sys 0m33.532s 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:30.206 
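The fuzz pass itself is a single command, so rerunning it outside the harness only needs the invocation already recorded above: -t is the run time in seconds (it accounts for most of the ~32.8 s wall time reported for the whole test) and -S fixes the seed so a failing run can be replayed; the remaining flags are kept exactly as logged:

  # ~30 s of randomized admin and I/O commands against the vfio-user controller
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a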
************************************ 00:17:30.206 END TEST nvmf_vfio_user_fuzz 00:17:30.206 ************************************ 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.206 ************************************ 00:17:30.206 START TEST nvmf_auth_target 00:17:30.206 ************************************ 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:30.206 * Looking for test storage... 00:17:30.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.206 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.207 10:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
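The host identity set up while sourcing nvmf/common.sh here is what later nvme connect calls in this test group present to the target. A minimal sketch of that pattern, assuming the host ID is simply the UUID suffix of the generated NQN and reusing the target address, port and subsystem NQN that appear elsewhere in this run (10.0.0.2, 4420, nqn.2016-06.io.spdk:testnqn):

  NVME_HOSTNQN=$(nvme gen-hostnqn)                  # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}                   # assumption: host ID = the UUID suffix of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # a later connect against this target then looks like:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"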
00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.207 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.472 10:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:35.472 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:35.472 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:35.472 Found net devices under 0000:af:00.0: cvl_0_0 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:35.472 10:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:35.472 Found net devices under 0000:af:00.1: cvl_0_1 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.472 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.473 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.731 10:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:17:35.731 00:17:35.731 --- 10.0.0.2 ping statistics --- 00:17:35.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.731 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:17:35.731 00:17:35.731 --- 10.0.0.1 ping statistics --- 00:17:35.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.731 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.731 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3885820 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3885820 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3885820 ']' 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:35.989 10:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:35.989 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3886079 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dbefcb42c485d6980e60a1b20a4682aaf898e997a73c3891 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.G4h 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dbefcb42c485d6980e60a1b20a4682aaf898e997a73c3891 0 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dbefcb42c485d6980e60a1b20a4682aaf898e997a73c3891 0 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dbefcb42c485d6980e60a1b20a4682aaf898e997a73c3891 00:17:36.922 10:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.G4h 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.G4h 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.G4h 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d3a3b0fecc1b9b799ef1f99eb609a30cd949e32fd855c45942c86d54527cb001 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jVY 00:17:36.922 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d3a3b0fecc1b9b799ef1f99eb609a30cd949e32fd855c45942c86d54527cb001 3 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d3a3b0fecc1b9b799ef1f99eb609a30cd949e32fd855c45942c86d54527cb001 3 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d3a3b0fecc1b9b799ef1f99eb609a30cd949e32fd855c45942c86d54527cb001 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jVY 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jVY 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.jVY 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.923 10:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9e2cab31189a9c4eb9e9c33936d64ad3 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.s00 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9e2cab31189a9c4eb9e9c33936d64ad3 1 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9e2cab31189a9c4eb9e9c33936d64ad3 1 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9e2cab31189a9c4eb9e9c33936d64ad3 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.s00 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.s00 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.s00 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cca5199d50c168a7c7b8974c20f998c71e2929c3f36a6a4f 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.d6b 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cca5199d50c168a7c7b8974c20f998c71e2929c3f36a6a4f 2 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
cca5199d50c168a7c7b8974c20f998c71e2929c3f36a6a4f 2 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cca5199d50c168a7c7b8974c20f998c71e2929c3f36a6a4f 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.d6b 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.d6b 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.d6b 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b537ed367edf9bca1c3236ebd296819e2f18976066c5ec10 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Xj5 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b537ed367edf9bca1c3236ebd296819e2f18976066c5ec10 2 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b537ed367edf9bca1c3236ebd296819e2f18976066c5ec10 2 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b537ed367edf9bca1c3236ebd296819e2f18976066c5ec10 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:36.923 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Xj5 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Xj5 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Xj5 00:17:37.181 10:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9fc14895e2bbbdcaa8a04006c5e8b3dd 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NBi 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9fc14895e2bbbdcaa8a04006c5e8b3dd 1 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9fc14895e2bbbdcaa8a04006c5e8b3dd 1 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9fc14895e2bbbdcaa8a04006c5e8b3dd 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NBi 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NBi 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.NBi 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c6369de5fa1e3a35bcb154ab4309648b2c4ade047c4a97b0688d3ef1d8e69a58 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:37.181 
10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nH2 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c6369de5fa1e3a35bcb154ab4309648b2c4ade047c4a97b0688d3ef1d8e69a58 3 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c6369de5fa1e3a35bcb154ab4309648b2c4ade047c4a97b0688d3ef1d8e69a58 3 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c6369de5fa1e3a35bcb154ab4309648b2c4ade047c4a97b0688d3ef1d8e69a58 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nH2 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nH2 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.nH2 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3885820 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3885820 ']' 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.181 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3886079 /var/tmp/host.sock 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3886079 ']' 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
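
The gen_dhchap_key trace above draws random bytes with xxd, wraps them into the DHHC-1 secret representation, and stores the result in a mode-0600 temp file. A minimal sketch of that flow in the same shell style; the base64-over-key-plus-CRC-32 encoding inside the python step is an assumption about the helper (the python body is not shown in the trace):

  # Sketch of gen_dhchap_key/format_dhchap_key as traced above (sha256, 32-char key).
  # Digest ids follow the map in the trace: null=0, sha256=1, sha384=2, sha512=3.
  digest=sha256
  len=32
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # hex string, $len characters
  file=$(mktemp -t "spdk.key-${digest}.XXX")
  # Assumed encoding: base64(key || little-endian CRC-32 of key), matching the
  # "DHHC-1:<digest-id>:<base64>:" secrets that appear later in this log.
  b64=$(python3 -c 'import sys,base64,zlib,struct; k=sys.argv[1].encode(); print(base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode())' "$key")
  echo "DHHC-1:01:${b64}:" > "$file"
  chmod 0600 "$file"
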
00:17:37.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.439 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.439 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.439 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:37.439 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:37.439 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.439 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.G4h 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.G4h 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.G4h 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.jVY ]] 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jVY 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jVY 00:17:37.697 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jVY 00:17:37.955 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:37.955 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.s00 00:17:37.955 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.955 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.955 10:32:41 
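
waitforlisten then blocks until each application answers on its RPC socket: the target on /var/tmp/spdk.sock (pid 3885820) and the host application on /var/tmp/host.sock (pid 3886079). A rough polling sketch; using rpc_get_methods as the readiness probe is an assumption, and the real helper does more (retry limits, error reporting):

  # Sketch only: poll an SPDK RPC socket until the application responds.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  wait_for_rpc() {   # usage: wait_for_rpc <pid> <rpc-socket>
      local pid=$1 sock=$2 i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                      # app died
          "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0   # socket is up
          sleep 0.5
      done
      return 1
  }
  wait_for_rpc 3885820 /var/tmp/spdk.sock
  wait_for_rpc 3886079 /var/tmp/host.sock
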
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.955 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.s00 00:17:37.955 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.s00 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.d6b ]] 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.d6b 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.d6b 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.d6b 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Xj5 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.212 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Xj5 00:17:38.213 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Xj5 00:17:38.470 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.NBi ]] 00:17:38.470 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NBi 00:17:38.470 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.470 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.470 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.470 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NBi 00:17:38.470 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NBi 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
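
The loop above registers every generated keyfile twice with keyring_file_add_key: once on the target (default /var/tmp/spdk.sock) and once on the host application via -s /var/tmp/host.sock, with controller keys named ckeyN; key3 follows just below. Condensed sketch, assuming the keys/ckeys arrays built earlier:

  # Sketch of the registration loop (target/auth.sh@81-86 in the trace).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target side
      "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
      if [[ -n ${ckeys[$i]} ]]; then
          "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
          "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
      fi
  done
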
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.nH2 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.nH2 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.nH2 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:38.728 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.986 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.244 00:17:39.244 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.244 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.244 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.502 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.502 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.502 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.502 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.502 { 00:17:39.502 "cntlid": 1, 00:17:39.502 "qid": 0, 00:17:39.502 "state": "enabled", 00:17:39.502 "thread": "nvmf_tgt_poll_group_000", 00:17:39.502 "listen_address": { 00:17:39.502 "trtype": "TCP", 00:17:39.502 "adrfam": "IPv4", 00:17:39.502 "traddr": "10.0.0.2", 00:17:39.502 "trsvcid": "4420" 00:17:39.502 }, 00:17:39.502 "peer_address": { 00:17:39.502 "trtype": "TCP", 00:17:39.502 "adrfam": "IPv4", 00:17:39.502 "traddr": "10.0.0.1", 00:17:39.502 "trsvcid": "47646" 00:17:39.502 }, 00:17:39.502 "auth": { 00:17:39.502 "state": "completed", 00:17:39.502 "digest": "sha256", 00:17:39.502 "dhgroup": "null" 00:17:39.502 } 00:17:39.502 } 00:17:39.502 ]' 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.502 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.759 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:40.324 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.581 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.582 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.582 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.582 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:17:40.582 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.839 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.839 { 00:17:40.839 "cntlid": 3, 00:17:40.839 "qid": 0, 00:17:40.839 "state": "enabled", 00:17:40.839 "thread": "nvmf_tgt_poll_group_000", 00:17:40.839 "listen_address": { 00:17:40.839 "trtype": "TCP", 00:17:40.839 "adrfam": "IPv4", 00:17:40.839 "traddr": "10.0.0.2", 00:17:40.839 "trsvcid": "4420" 00:17:40.839 }, 00:17:40.839 "peer_address": { 00:17:40.840 "trtype": "TCP", 00:17:40.840 "adrfam": "IPv4", 00:17:40.840 "traddr": "10.0.0.1", 00:17:40.840 "trsvcid": "47690" 00:17:40.840 }, 00:17:40.840 "auth": { 00:17:40.840 "state": "completed", 00:17:40.840 "digest": "sha256", 00:17:40.840 "dhgroup": "null" 00:17:40.840 } 00:17:40.840 } 00:17:40.840 ]' 00:17:40.840 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.840 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.840 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.097 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.097 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.097 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.097 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.097 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.354 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.920 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.920 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.178 00:17:42.178 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.178 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.178 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
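
Each keyid iteration above follows the same connect_authenticate setup: restrict the host application to one digest/DH-group combination with bdev_nvme_set_options, grant the host NQN access to the subsystem together with its DH-HMAC-CHAP keys, then attach a controller through the host application. A condensed sketch of that setup half, using the key0/ckey0 pair and the NQNs from the trace:

  # Sketch of one connect_authenticate iteration (setup half), as traced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  subnqn=nqn.2024-03.io.spdk:cnode0
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0          # ctrlr key dropped when the ckey slot is empty
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
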
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.435 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.435 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.435 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.435 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.435 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.435 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.435 { 00:17:42.435 "cntlid": 5, 00:17:42.435 "qid": 0, 00:17:42.435 "state": "enabled", 00:17:42.435 "thread": "nvmf_tgt_poll_group_000", 00:17:42.435 "listen_address": { 00:17:42.435 "trtype": "TCP", 00:17:42.435 "adrfam": "IPv4", 00:17:42.435 "traddr": "10.0.0.2", 00:17:42.435 "trsvcid": "4420" 00:17:42.435 }, 00:17:42.435 "peer_address": { 00:17:42.435 "trtype": "TCP", 00:17:42.435 "adrfam": "IPv4", 00:17:42.435 "traddr": "10.0.0.1", 00:17:42.435 "trsvcid": "47716" 00:17:42.435 }, 00:17:42.435 "auth": { 00:17:42.436 "state": "completed", 00:17:42.436 "digest": "sha256", 00:17:42.436 "dhgroup": "null" 00:17:42.436 } 00:17:42.436 } 00:17:42.436 ]' 00:17:42.436 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.436 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.436 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.436 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.436 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.436 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.436 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.436 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.693 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:43.259 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.517 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.774 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.774 { 00:17:43.774 "cntlid": 7, 00:17:43.774 "qid": 0, 00:17:43.774 "state": "enabled", 00:17:43.774 "thread": "nvmf_tgt_poll_group_000", 00:17:43.774 "listen_address": { 00:17:43.774 "trtype": "TCP", 00:17:43.774 "adrfam": "IPv4", 00:17:43.774 "traddr": "10.0.0.2", 00:17:43.774 "trsvcid": "4420" 00:17:43.774 }, 00:17:43.774 "peer_address": { 00:17:43.774 "trtype": "TCP", 00:17:43.774 "adrfam": "IPv4", 00:17:43.774 "traddr": "10.0.0.1", 00:17:43.774 "trsvcid": "55246" 00:17:43.774 }, 00:17:43.774 "auth": { 00:17:43.774 "state": "completed", 00:17:43.774 "digest": "sha256", 00:17:43.774 "dhgroup": "null" 00:17:43.774 } 00:17:43.774 } 00:17:43.774 ]' 00:17:43.774 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.032 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.032 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.032 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.032 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.032 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.032 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.032 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.312 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.886 10:32:48 
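
After the SPDK host path is checked, the same subsystem is exercised with the kernel initiator: nvme connect is handed the raw DHHC-1 secrets directly, with --dhchap-ctrl-secret included only when a controller key exists (key3 has none, so that iteration above authenticates one way only), and the host is removed again after nvme disconnect. Trimmed sketch with the secrets shortened to placeholders:

  # Sketch of the kernel-initiator leg; secrets are placeholders, not real values.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostid=006f0d1b-21c0-e711-906e-00163566263e
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:<base64>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<base64>:'     # omitted for key3 (unidirectional auth)
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      "nqn.2014-08.org.nvmexpress:uuid:${hostid}"
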
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.886 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.143 00:17:45.143 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.143 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.143 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.400 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.400 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.400 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.400 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.400 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.400 10:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.400 { 00:17:45.400 "cntlid": 9, 00:17:45.400 "qid": 0, 00:17:45.400 "state": "enabled", 00:17:45.400 "thread": "nvmf_tgt_poll_group_000", 00:17:45.400 "listen_address": { 00:17:45.400 "trtype": "TCP", 00:17:45.400 "adrfam": "IPv4", 00:17:45.400 "traddr": "10.0.0.2", 00:17:45.400 "trsvcid": "4420" 00:17:45.400 }, 00:17:45.400 "peer_address": { 00:17:45.400 "trtype": "TCP", 00:17:45.401 "adrfam": "IPv4", 00:17:45.401 "traddr": "10.0.0.1", 00:17:45.401 "trsvcid": "55270" 00:17:45.401 }, 00:17:45.401 "auth": { 00:17:45.401 "state": "completed", 00:17:45.401 "digest": "sha256", 00:17:45.401 "dhgroup": "ffdhe2048" 00:17:45.401 } 00:17:45.401 } 00:17:45.401 ]' 00:17:45.401 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.401 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.401 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.401 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.401 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.401 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.401 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.401 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.658 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:46.222 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
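
The verification step reads the controller name back from the host side and then asks the target for the subsystem's queue pairs; the jq filters confirm the qpair finished DH-HMAC-CHAP authentication with the expected digest and DH group before the controller is detached. Short sketch, with the literals matching the ffdhe2048 iteration running at this point:

  # Sketch of the verification step traced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # null/ffdhe2048/... per iteration
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
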
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.479 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.737 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.737 { 00:17:46.737 "cntlid": 11, 00:17:46.737 "qid": 0, 00:17:46.737 "state": "enabled", 00:17:46.737 "thread": "nvmf_tgt_poll_group_000", 00:17:46.737 "listen_address": { 
00:17:46.737 "trtype": "TCP", 00:17:46.737 "adrfam": "IPv4", 00:17:46.737 "traddr": "10.0.0.2", 00:17:46.737 "trsvcid": "4420" 00:17:46.737 }, 00:17:46.737 "peer_address": { 00:17:46.737 "trtype": "TCP", 00:17:46.737 "adrfam": "IPv4", 00:17:46.737 "traddr": "10.0.0.1", 00:17:46.737 "trsvcid": "55296" 00:17:46.737 }, 00:17:46.737 "auth": { 00:17:46.737 "state": "completed", 00:17:46.737 "digest": "sha256", 00:17:46.737 "dhgroup": "ffdhe2048" 00:17:46.737 } 00:17:46.737 } 00:17:46.737 ]' 00:17:46.737 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.994 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.994 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.994 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.994 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.994 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.994 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.994 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.251 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.815 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.816 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.073 00:17:48.073 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.073 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.073 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.330 { 00:17:48.330 "cntlid": 13, 00:17:48.330 "qid": 0, 00:17:48.330 "state": "enabled", 00:17:48.330 "thread": "nvmf_tgt_poll_group_000", 00:17:48.330 "listen_address": { 00:17:48.330 "trtype": "TCP", 00:17:48.330 "adrfam": "IPv4", 00:17:48.330 "traddr": "10.0.0.2", 00:17:48.330 "trsvcid": "4420" 00:17:48.330 }, 00:17:48.330 "peer_address": { 00:17:48.330 "trtype": "TCP", 00:17:48.330 "adrfam": "IPv4", 00:17:48.330 "traddr": "10.0.0.1", 00:17:48.330 "trsvcid": "55322" 00:17:48.330 }, 00:17:48.330 "auth": { 00:17:48.330 
"state": "completed", 00:17:48.330 "digest": "sha256", 00:17:48.330 "dhgroup": "ffdhe2048" 00:17:48.330 } 00:17:48.330 } 00:17:48.330 ]' 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.330 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.587 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.587 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.587 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.587 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:17:49.151 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.151 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:49.151 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.151 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.152 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.152 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.152 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.152 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.409 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.667 00:17:49.667 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.667 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.667 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.667 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.924 { 00:17:49.924 "cntlid": 15, 00:17:49.924 "qid": 0, 00:17:49.924 "state": "enabled", 00:17:49.924 "thread": "nvmf_tgt_poll_group_000", 00:17:49.924 "listen_address": { 00:17:49.924 "trtype": "TCP", 00:17:49.924 "adrfam": "IPv4", 00:17:49.924 "traddr": "10.0.0.2", 00:17:49.924 "trsvcid": "4420" 00:17:49.924 }, 00:17:49.924 "peer_address": { 00:17:49.924 "trtype": "TCP", 00:17:49.924 "adrfam": "IPv4", 00:17:49.924 "traddr": "10.0.0.1", 00:17:49.924 "trsvcid": "55354" 00:17:49.924 }, 00:17:49.924 "auth": { 00:17:49.924 "state": "completed", 00:17:49.924 "digest": "sha256", 00:17:49.924 "dhgroup": "ffdhe2048" 00:17:49.924 } 00:17:49.924 } 00:17:49.924 ]' 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.924 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.924 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.181 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.747 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.004 00:17:51.005 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.005 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.005 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.262 { 00:17:51.262 "cntlid": 17, 00:17:51.262 "qid": 0, 00:17:51.262 "state": "enabled", 00:17:51.262 "thread": "nvmf_tgt_poll_group_000", 00:17:51.262 "listen_address": { 00:17:51.262 "trtype": "TCP", 00:17:51.262 "adrfam": "IPv4", 00:17:51.262 "traddr": "10.0.0.2", 00:17:51.262 "trsvcid": "4420" 00:17:51.262 }, 00:17:51.262 "peer_address": { 00:17:51.262 "trtype": "TCP", 00:17:51.262 "adrfam": "IPv4", 00:17:51.262 "traddr": "10.0.0.1", 00:17:51.262 "trsvcid": "55382" 00:17:51.262 }, 00:17:51.262 "auth": { 00:17:51.262 "state": "completed", 00:17:51.262 "digest": "sha256", 00:17:51.262 "dhgroup": "ffdhe3072" 00:17:51.262 } 00:17:51.262 } 00:17:51.262 ]' 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.262 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.262 10:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.519 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.519 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.519 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.519 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.083 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.341 10:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.341 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.599 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.599 { 00:17:52.599 "cntlid": 19, 00:17:52.599 "qid": 0, 00:17:52.599 "state": "enabled", 00:17:52.599 "thread": "nvmf_tgt_poll_group_000", 00:17:52.599 "listen_address": { 00:17:52.599 "trtype": "TCP", 00:17:52.599 "adrfam": "IPv4", 00:17:52.599 "traddr": "10.0.0.2", 00:17:52.599 "trsvcid": "4420" 00:17:52.599 }, 00:17:52.599 "peer_address": { 00:17:52.599 "trtype": "TCP", 00:17:52.599 "adrfam": "IPv4", 00:17:52.599 "traddr": "10.0.0.1", 00:17:52.599 "trsvcid": "55714" 00:17:52.599 }, 00:17:52.599 "auth": { 00:17:52.599 "state": "completed", 00:17:52.599 "digest": "sha256", 00:17:52.599 "dhgroup": "ffdhe3072" 00:17:52.599 } 00:17:52.599 } 00:17:52.599 ]' 00:17:52.599 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.857 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.857 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.857 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.857 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.857 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.857 10:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.857 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.114 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.679 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.937 00:17:53.937 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.937 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.937 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.195 { 00:17:54.195 "cntlid": 21, 00:17:54.195 "qid": 0, 00:17:54.195 "state": "enabled", 00:17:54.195 "thread": "nvmf_tgt_poll_group_000", 00:17:54.195 "listen_address": { 00:17:54.195 "trtype": "TCP", 00:17:54.195 "adrfam": "IPv4", 00:17:54.195 "traddr": "10.0.0.2", 00:17:54.195 "trsvcid": "4420" 00:17:54.195 }, 00:17:54.195 "peer_address": { 00:17:54.195 "trtype": "TCP", 00:17:54.195 "adrfam": "IPv4", 00:17:54.195 "traddr": "10.0.0.1", 00:17:54.195 "trsvcid": "55738" 00:17:54.195 }, 00:17:54.195 "auth": { 00:17:54.195 "state": "completed", 00:17:54.195 "digest": "sha256", 00:17:54.195 "dhgroup": "ffdhe3072" 00:17:54.195 } 00:17:54.195 } 00:17:54.195 ]' 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.195 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.453 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.453 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.453 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.453 
10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.020 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.279 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.537 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.537 { 00:17:55.537 "cntlid": 23, 00:17:55.537 "qid": 0, 00:17:55.537 "state": "enabled", 00:17:55.537 "thread": "nvmf_tgt_poll_group_000", 00:17:55.537 "listen_address": { 00:17:55.537 "trtype": "TCP", 00:17:55.537 "adrfam": "IPv4", 00:17:55.537 "traddr": "10.0.0.2", 00:17:55.537 "trsvcid": "4420" 00:17:55.537 }, 00:17:55.537 "peer_address": { 00:17:55.537 "trtype": "TCP", 00:17:55.537 "adrfam": "IPv4", 00:17:55.537 "traddr": "10.0.0.1", 00:17:55.537 "trsvcid": "55750" 00:17:55.537 }, 00:17:55.537 "auth": { 00:17:55.537 "state": "completed", 00:17:55.537 "digest": "sha256", 00:17:55.537 "dhgroup": "ffdhe3072" 00:17:55.537 } 00:17:55.537 } 00:17:55.537 ]' 00:17:55.537 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.795 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.795 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.795 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.795 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.795 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.795 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.795 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.056 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:17:56.354 10:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.627 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.628 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.628 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.628 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.628 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.628 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.628 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.885 00:17:56.885 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.885 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.885 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.143 { 00:17:57.143 "cntlid": 25, 00:17:57.143 "qid": 0, 00:17:57.143 "state": "enabled", 00:17:57.143 "thread": "nvmf_tgt_poll_group_000", 00:17:57.143 "listen_address": { 00:17:57.143 "trtype": "TCP", 00:17:57.143 "adrfam": "IPv4", 00:17:57.143 "traddr": "10.0.0.2", 00:17:57.143 "trsvcid": "4420" 00:17:57.143 }, 00:17:57.143 "peer_address": { 00:17:57.143 "trtype": "TCP", 00:17:57.143 "adrfam": "IPv4", 00:17:57.143 "traddr": "10.0.0.1", 00:17:57.143 "trsvcid": "55794" 00:17:57.143 }, 00:17:57.143 "auth": { 00:17:57.143 "state": "completed", 00:17:57.143 "digest": "sha256", 00:17:57.143 "dhgroup": "ffdhe4096" 00:17:57.143 } 00:17:57.143 } 00:17:57.143 ]' 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.143 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.405 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
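The stretch of trace above is the script's per-key DH-HMAC-CHAP round trip (the target/auth.sh@34-@49 frames): allow the host on the subsystem with one key pair, attach an SPDK-side controller through /var/tmp/host.sock, confirm the qpair reports the expected digest/dhgroup and a completed auth state, then detach. As a reading aid only, a condensed bash sketch of that flow follows; rpc, subnqn, hostnqn, keyid and dhgroup are placeholders for the values shown in the trace, the target-side calls are assumed to go to the target's default RPC socket (the script itself uses its rpc_cmd wrapper), and the named keys key0..key3 / ckey0..ckey3 are assumed to have been registered earlier in the run.

# Hedged sketch of the per-key check repeated in the trace (placeholder values).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# Allow the host on the subsystem with the key pair under test (target-side RPC).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach from the SPDK initiator that listens on /var/tmp/host.sock.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Confirm the controller came up and the qpair negotiated the expected auth parameters.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before moving on to the next key / DH-group combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0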
00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.973 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.231 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.489 00:17:58.489 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.489 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.489 10:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.747 { 00:17:58.747 "cntlid": 27, 00:17:58.747 "qid": 0, 00:17:58.747 "state": "enabled", 00:17:58.747 "thread": "nvmf_tgt_poll_group_000", 00:17:58.747 "listen_address": { 00:17:58.747 "trtype": "TCP", 00:17:58.747 "adrfam": "IPv4", 00:17:58.747 "traddr": "10.0.0.2", 00:17:58.747 "trsvcid": "4420" 00:17:58.747 }, 00:17:58.747 "peer_address": { 00:17:58.747 "trtype": "TCP", 00:17:58.747 "adrfam": "IPv4", 00:17:58.747 "traddr": "10.0.0.1", 00:17:58.747 "trsvcid": "55816" 00:17:58.747 }, 00:17:58.747 "auth": { 00:17:58.747 "state": "completed", 00:17:58.747 "digest": "sha256", 00:17:58.747 "dhgroup": "ffdhe4096" 00:17:58.747 } 00:17:58.747 } 00:17:58.747 ]' 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.747 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.005 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:17:59.569 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.569 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:59.569 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:59.569 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.569 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.570 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.827 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.827 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.827 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.827 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
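Around each of those checks the trace also shows the kernel-initiator leg (nvme connect with the plain-text DHHC-1 secrets, then nvme disconnect and nvmf_subsystem_remove_host, the @52-@56 frames) and the loops at @92-@96 that rotate through the key indices and FFDHE groups. The sketch below keeps the order the trace prints, without claiming where the function boundary of connect_authenticate sits; the keys/ckeys arrays and $hostid are placeholders, and $rpc/$subnqn/$hostnqn are the same stand-ins as in the sketch above.

# Hedged sketch of the iteration visible in the @92-@96 frames (placeholder arrays).
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this part of the log
for dhgroup in "${dhgroups[@]}"; do                  # auth.sh@92
    for keyid in "${!keys[@]}"; do                   # auth.sh@93
        # Limit the SPDK initiator to one digest / DH-group combination per pass.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"    # auth.sh@94

        # auth.sh@96: connect_authenticate sha256 "$dhgroup" "$keyid"
        # (the per-key round trip sketched above), followed in the trace by the
        # kernel-initiator leg and cleanup (@52-@56):
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
            --dhchap-secret "${keys[$keyid]}" --dhchap-ctrl-secret "${ckeys[$keyid]}"
        # (for indices without a controller key the script omits the ctrl-secret /
        #  ctrlr-key arguments, via the ${ckeys[...]:+...} expansion shown at auth.sh@37)
        nvme disconnect -n "$subnqn"
        "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
done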
00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.084 { 00:18:00.084 "cntlid": 29, 00:18:00.084 "qid": 0, 00:18:00.084 "state": "enabled", 00:18:00.084 "thread": "nvmf_tgt_poll_group_000", 00:18:00.084 "listen_address": { 00:18:00.084 "trtype": "TCP", 00:18:00.084 "adrfam": "IPv4", 00:18:00.084 "traddr": "10.0.0.2", 00:18:00.084 "trsvcid": "4420" 00:18:00.084 }, 00:18:00.084 "peer_address": { 00:18:00.084 "trtype": "TCP", 00:18:00.084 "adrfam": "IPv4", 00:18:00.084 "traddr": "10.0.0.1", 00:18:00.084 "trsvcid": "55836" 00:18:00.084 }, 00:18:00.084 "auth": { 00:18:00.084 "state": "completed", 00:18:00.084 "digest": "sha256", 00:18:00.084 "dhgroup": "ffdhe4096" 00:18:00.084 } 00:18:00.084 } 00:18:00.084 ]' 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.084 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.342 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.342 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.342 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.342 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.342 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.342 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.909 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.167 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.424 00:18:01.424 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.424 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.424 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:18:01.682 { 00:18:01.682 "cntlid": 31, 00:18:01.682 "qid": 0, 00:18:01.682 "state": "enabled", 00:18:01.682 "thread": "nvmf_tgt_poll_group_000", 00:18:01.682 "listen_address": { 00:18:01.682 "trtype": "TCP", 00:18:01.682 "adrfam": "IPv4", 00:18:01.682 "traddr": "10.0.0.2", 00:18:01.682 "trsvcid": "4420" 00:18:01.682 }, 00:18:01.682 "peer_address": { 00:18:01.682 "trtype": "TCP", 00:18:01.682 "adrfam": "IPv4", 00:18:01.682 "traddr": "10.0.0.1", 00:18:01.682 "trsvcid": "55870" 00:18:01.682 }, 00:18:01.682 "auth": { 00:18:01.682 "state": "completed", 00:18:01.682 "digest": "sha256", 00:18:01.682 "dhgroup": "ffdhe4096" 00:18:01.682 } 00:18:01.682 } 00:18:01.682 ]' 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.682 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.940 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.506 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.763 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.764 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.021 00:18:03.021 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.021 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.021 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.279 { 00:18:03.279 "cntlid": 33, 00:18:03.279 "qid": 0, 00:18:03.279 "state": "enabled", 00:18:03.279 "thread": "nvmf_tgt_poll_group_000", 00:18:03.279 "listen_address": { 00:18:03.279 "trtype": "TCP", 00:18:03.279 "adrfam": "IPv4", 
00:18:03.279 "traddr": "10.0.0.2", 00:18:03.279 "trsvcid": "4420" 00:18:03.279 }, 00:18:03.279 "peer_address": { 00:18:03.279 "trtype": "TCP", 00:18:03.279 "adrfam": "IPv4", 00:18:03.279 "traddr": "10.0.0.1", 00:18:03.279 "trsvcid": "45156" 00:18:03.279 }, 00:18:03.279 "auth": { 00:18:03.279 "state": "completed", 00:18:03.279 "digest": "sha256", 00:18:03.279 "dhgroup": "ffdhe6144" 00:18:03.279 } 00:18:03.279 } 00:18:03.279 ]' 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.279 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.536 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.101 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.359 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.617 00:18:04.617 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.617 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.617 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.874 { 00:18:04.874 "cntlid": 35, 00:18:04.874 "qid": 0, 00:18:04.874 "state": "enabled", 00:18:04.874 "thread": "nvmf_tgt_poll_group_000", 00:18:04.874 "listen_address": { 00:18:04.874 "trtype": "TCP", 00:18:04.874 "adrfam": "IPv4", 00:18:04.874 "traddr": "10.0.0.2", 00:18:04.874 "trsvcid": "4420" 00:18:04.874 }, 00:18:04.874 "peer_address": { 00:18:04.874 "trtype": "TCP", 00:18:04.874 "adrfam": "IPv4", 00:18:04.874 "traddr": "10.0.0.1", 00:18:04.874 "trsvcid": "45174" 00:18:04.874 }, 00:18:04.874 "auth": { 00:18:04.874 
"state": "completed", 00:18:04.874 "digest": "sha256", 00:18:04.874 "dhgroup": "ffdhe6144" 00:18:04.874 } 00:18:04.874 } 00:18:04.874 ]' 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.874 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.875 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.875 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.132 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.697 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.955 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.212 00:18:06.212 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.212 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.212 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.469 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.469 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.469 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.470 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.470 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.470 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.470 { 00:18:06.470 "cntlid": 37, 00:18:06.470 "qid": 0, 00:18:06.470 "state": "enabled", 00:18:06.470 "thread": "nvmf_tgt_poll_group_000", 00:18:06.470 "listen_address": { 00:18:06.470 "trtype": "TCP", 00:18:06.470 "adrfam": "IPv4", 00:18:06.470 "traddr": "10.0.0.2", 00:18:06.470 "trsvcid": "4420" 00:18:06.470 }, 00:18:06.470 "peer_address": { 00:18:06.470 "trtype": "TCP", 00:18:06.470 "adrfam": "IPv4", 00:18:06.470 "traddr": "10.0.0.1", 00:18:06.470 "trsvcid": "45206" 00:18:06.470 }, 00:18:06.470 "auth": { 00:18:06.470 "state": "completed", 00:18:06.470 "digest": "sha256", 00:18:06.470 "dhgroup": "ffdhe6144" 00:18:06.470 } 00:18:06.470 } 00:18:06.470 ]' 00:18:06.470 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.470 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:18:06.470 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.470 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.470 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.470 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.470 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.470 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.727 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.292 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.858 00:18:07.858 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.859 { 00:18:07.859 "cntlid": 39, 00:18:07.859 "qid": 0, 00:18:07.859 "state": "enabled", 00:18:07.859 "thread": "nvmf_tgt_poll_group_000", 00:18:07.859 "listen_address": { 00:18:07.859 "trtype": "TCP", 00:18:07.859 "adrfam": "IPv4", 00:18:07.859 "traddr": "10.0.0.2", 00:18:07.859 "trsvcid": "4420" 00:18:07.859 }, 00:18:07.859 "peer_address": { 00:18:07.859 "trtype": "TCP", 00:18:07.859 "adrfam": "IPv4", 00:18:07.859 "traddr": "10.0.0.1", 00:18:07.859 "trsvcid": "45234" 00:18:07.859 }, 00:18:07.859 "auth": { 00:18:07.859 "state": "completed", 00:18:07.859 "digest": "sha256", 00:18:07.859 "dhgroup": "ffdhe6144" 00:18:07.859 } 00:18:07.859 } 00:18:07.859 ]' 00:18:07.859 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.117 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.117 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.117 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.117 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.117 
10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.117 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.117 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.374 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.940 10:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.940 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.506 00:18:09.506 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.506 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.506 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.506 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.506 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.506 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.506 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.777 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.777 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.777 { 00:18:09.777 "cntlid": 41, 00:18:09.777 "qid": 0, 00:18:09.777 "state": "enabled", 00:18:09.778 "thread": "nvmf_tgt_poll_group_000", 00:18:09.778 "listen_address": { 00:18:09.778 "trtype": "TCP", 00:18:09.778 "adrfam": "IPv4", 00:18:09.778 "traddr": "10.0.0.2", 00:18:09.778 "trsvcid": "4420" 00:18:09.778 }, 00:18:09.778 "peer_address": { 00:18:09.778 "trtype": "TCP", 00:18:09.778 "adrfam": "IPv4", 00:18:09.778 "traddr": "10.0.0.1", 00:18:09.778 "trsvcid": "45258" 00:18:09.778 }, 00:18:09.778 "auth": { 00:18:09.778 "state": "completed", 00:18:09.778 "digest": "sha256", 00:18:09.778 "dhgroup": "ffdhe8192" 00:18:09.778 } 00:18:09.778 } 00:18:09.778 ]' 00:18:09.778 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.778 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.778 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.778 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.778 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.778 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.778 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.778 10:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.044 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.611 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.612 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.177 00:18:11.177 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.177 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.177 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.435 { 00:18:11.435 "cntlid": 43, 00:18:11.435 "qid": 0, 00:18:11.435 "state": "enabled", 00:18:11.435 "thread": "nvmf_tgt_poll_group_000", 00:18:11.435 "listen_address": { 00:18:11.435 "trtype": "TCP", 00:18:11.435 "adrfam": "IPv4", 00:18:11.435 "traddr": "10.0.0.2", 00:18:11.435 "trsvcid": "4420" 00:18:11.435 }, 00:18:11.435 "peer_address": { 00:18:11.435 "trtype": "TCP", 00:18:11.435 "adrfam": "IPv4", 00:18:11.435 "traddr": "10.0.0.1", 00:18:11.435 "trsvcid": "45300" 00:18:11.435 }, 00:18:11.435 "auth": { 00:18:11.435 "state": "completed", 00:18:11.435 "digest": "sha256", 00:18:11.435 "dhgroup": "ffdhe8192" 00:18:11.435 } 00:18:11.435 } 00:18:11.435 ]' 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.435 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.435 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.435 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.435 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.693 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.258 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.822 00:18:12.822 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.822 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.823 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.080 { 00:18:13.080 "cntlid": 45, 00:18:13.080 "qid": 0, 00:18:13.080 "state": "enabled", 00:18:13.080 "thread": "nvmf_tgt_poll_group_000", 00:18:13.080 "listen_address": { 00:18:13.080 "trtype": "TCP", 00:18:13.080 "adrfam": "IPv4", 00:18:13.080 "traddr": "10.0.0.2", 00:18:13.080 "trsvcid": "4420" 00:18:13.080 }, 00:18:13.080 "peer_address": { 00:18:13.080 "trtype": "TCP", 00:18:13.080 "adrfam": "IPv4", 00:18:13.080 "traddr": "10.0.0.1", 00:18:13.080 "trsvcid": "60644" 00:18:13.080 }, 00:18:13.080 "auth": { 00:18:13.080 "state": "completed", 00:18:13.080 "digest": "sha256", 00:18:13.080 "dhgroup": "ffdhe8192" 00:18:13.080 } 00:18:13.080 } 00:18:13.080 ]' 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.080 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.081 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.081 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.081 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.081 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.081 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.338 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret 
DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.902 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.159 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.416 00:18:14.416 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.416 10:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.416 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.673 { 00:18:14.673 "cntlid": 47, 00:18:14.673 "qid": 0, 00:18:14.673 "state": "enabled", 00:18:14.673 "thread": "nvmf_tgt_poll_group_000", 00:18:14.673 "listen_address": { 00:18:14.673 "trtype": "TCP", 00:18:14.673 "adrfam": "IPv4", 00:18:14.673 "traddr": "10.0.0.2", 00:18:14.673 "trsvcid": "4420" 00:18:14.673 }, 00:18:14.673 "peer_address": { 00:18:14.673 "trtype": "TCP", 00:18:14.673 "adrfam": "IPv4", 00:18:14.673 "traddr": "10.0.0.1", 00:18:14.673 "trsvcid": "60668" 00:18:14.673 }, 00:18:14.673 "auth": { 00:18:14.673 "state": "completed", 00:18:14.673 "digest": "sha256", 00:18:14.673 "dhgroup": "ffdhe8192" 00:18:14.673 } 00:18:14.673 } 00:18:14.673 ]' 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.673 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.930 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.930 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.930 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.930 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.930 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.930 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.494 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.751 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.009 00:18:16.009 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.009 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.009 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.267 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.267 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.267 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.267 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.267 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.267 { 00:18:16.267 "cntlid": 49, 00:18:16.267 "qid": 0, 00:18:16.267 "state": "enabled", 00:18:16.267 "thread": "nvmf_tgt_poll_group_000", 00:18:16.267 "listen_address": { 00:18:16.267 "trtype": "TCP", 00:18:16.267 "adrfam": "IPv4", 00:18:16.267 "traddr": "10.0.0.2", 00:18:16.267 "trsvcid": "4420" 00:18:16.267 }, 00:18:16.267 "peer_address": { 00:18:16.267 "trtype": "TCP", 00:18:16.267 "adrfam": "IPv4", 00:18:16.268 "traddr": "10.0.0.1", 00:18:16.268 "trsvcid": "60696" 00:18:16.268 }, 00:18:16.268 "auth": { 00:18:16.268 "state": "completed", 00:18:16.268 "digest": "sha384", 00:18:16.268 "dhgroup": "null" 00:18:16.268 } 00:18:16.268 } 00:18:16.268 ]' 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.268 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.526 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:17.090 10:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.090 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.091 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.348 00:18:17.348 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.348 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.348 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.606 { 00:18:17.606 "cntlid": 51, 00:18:17.606 "qid": 0, 00:18:17.606 "state": "enabled", 00:18:17.606 "thread": "nvmf_tgt_poll_group_000", 00:18:17.606 "listen_address": { 00:18:17.606 "trtype": "TCP", 00:18:17.606 "adrfam": "IPv4", 00:18:17.606 "traddr": "10.0.0.2", 00:18:17.606 "trsvcid": "4420" 00:18:17.606 }, 00:18:17.606 "peer_address": { 00:18:17.606 "trtype": "TCP", 00:18:17.606 "adrfam": "IPv4", 00:18:17.606 "traddr": "10.0.0.1", 00:18:17.606 "trsvcid": "60716" 00:18:17.606 }, 00:18:17.606 "auth": { 00:18:17.606 "state": "completed", 00:18:17.606 "digest": "sha384", 00:18:17.606 "dhgroup": "null" 00:18:17.606 } 00:18:17.606 } 00:18:17.606 ]' 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.606 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.863 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.863 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.863 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.863 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
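Each key is also exercised in-band through the kernel initiator before being revoked. A minimal sketch of that leg, restating only commands visible in the trace; $key1 and $ckey1 are illustrative placeholders for the DHHC-1:01:... and DHHC-1:02:... secret strings printed above, not variables echoed by the script:
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
    --hostid 006f0d1b-21c0-e711-906e-00163566263e \
    --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"   # placeholders for the DHHC-1 strings shown above
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# target side: drop the host entry before the next key is configured
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e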
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.429 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.687 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.945 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.945 { 00:18:18.945 "cntlid": 53, 00:18:18.945 "qid": 0, 00:18:18.945 "state": "enabled", 00:18:18.945 "thread": "nvmf_tgt_poll_group_000", 00:18:18.945 "listen_address": { 00:18:18.945 "trtype": "TCP", 00:18:18.945 "adrfam": "IPv4", 00:18:18.945 "traddr": "10.0.0.2", 00:18:18.945 "trsvcid": "4420" 00:18:18.945 }, 00:18:18.945 "peer_address": { 00:18:18.945 "trtype": "TCP", 00:18:18.945 "adrfam": "IPv4", 00:18:18.945 "traddr": "10.0.0.1", 00:18:18.945 "trsvcid": "60740" 00:18:18.945 }, 00:18:18.945 "auth": { 00:18:18.945 "state": "completed", 00:18:18.945 "digest": "sha384", 00:18:18.945 "dhgroup": "null" 00:18:18.945 } 00:18:18.945 } 00:18:18.945 ]' 00:18:18.945 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.203 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.203 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.203 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:19.203 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.203 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.203 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.203 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.460 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.025 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.282 00:18:20.282 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.282 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.282 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.539 { 00:18:20.539 "cntlid": 55, 00:18:20.539 "qid": 0, 00:18:20.539 "state": "enabled", 00:18:20.539 "thread": "nvmf_tgt_poll_group_000", 00:18:20.539 "listen_address": { 00:18:20.539 "trtype": "TCP", 00:18:20.539 "adrfam": "IPv4", 00:18:20.539 "traddr": "10.0.0.2", 00:18:20.539 "trsvcid": "4420" 00:18:20.539 }, 00:18:20.539 "peer_address": { 
00:18:20.539 "trtype": "TCP", 00:18:20.539 "adrfam": "IPv4", 00:18:20.539 "traddr": "10.0.0.1", 00:18:20.539 "trsvcid": "60774" 00:18:20.539 }, 00:18:20.539 "auth": { 00:18:20.539 "state": "completed", 00:18:20.539 "digest": "sha384", 00:18:20.539 "dhgroup": "null" 00:18:20.539 } 00:18:20.539 } 00:18:20.539 ]' 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.539 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.795 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.356 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.613 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.869 { 00:18:21.869 "cntlid": 57, 00:18:21.869 "qid": 0, 00:18:21.869 "state": "enabled", 00:18:21.869 "thread": "nvmf_tgt_poll_group_000", 00:18:21.869 "listen_address": { 00:18:21.869 "trtype": "TCP", 00:18:21.869 "adrfam": "IPv4", 00:18:21.869 "traddr": "10.0.0.2", 00:18:21.869 "trsvcid": "4420" 00:18:21.869 }, 00:18:21.869 "peer_address": { 00:18:21.869 "trtype": "TCP", 00:18:21.869 "adrfam": "IPv4", 00:18:21.869 "traddr": "10.0.0.1", 00:18:21.869 "trsvcid": "60812" 00:18:21.869 }, 00:18:21.869 "auth": { 00:18:21.869 "state": "completed", 00:18:21.869 "digest": "sha384", 00:18:21.869 "dhgroup": "ffdhe2048" 00:18:21.869 } 00:18:21.869 } 00:18:21.869 ]' 
00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.869 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.127 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.127 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.127 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.127 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.127 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.127 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.691 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.948 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.206 00:18:23.206 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.206 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.206 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.464 { 00:18:23.464 "cntlid": 59, 00:18:23.464 "qid": 0, 00:18:23.464 "state": "enabled", 00:18:23.464 "thread": "nvmf_tgt_poll_group_000", 00:18:23.464 "listen_address": { 00:18:23.464 "trtype": "TCP", 00:18:23.464 "adrfam": "IPv4", 00:18:23.464 "traddr": "10.0.0.2", 00:18:23.464 "trsvcid": "4420" 00:18:23.464 }, 00:18:23.464 "peer_address": { 00:18:23.464 "trtype": "TCP", 00:18:23.464 "adrfam": "IPv4", 00:18:23.464 "traddr": "10.0.0.1", 00:18:23.464 "trsvcid": "49758" 00:18:23.464 }, 00:18:23.464 "auth": { 00:18:23.464 "state": "completed", 00:18:23.464 "digest": "sha384", 00:18:23.464 "dhgroup": "ffdhe2048" 00:18:23.464 } 00:18:23.464 } 00:18:23.464 ]' 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.464 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.464 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.464 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.464 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.464 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.464 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.722 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.288 
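The pass/fail criteria for every round are the jq assertions at target/auth.sh@44-@48. A compact restatement for the ffdhe2048 rounds; the qpairs capture mirrors the trace's qpairs='[...]' assignment, while feeding it to jq through a here-string is illustrative plumbing that the trace does not show verbatim:
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]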
10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.288 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.546 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.546 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.546 00:18:24.546 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.546 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.546 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.804 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.804 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.804 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.805 { 00:18:24.805 "cntlid": 61, 00:18:24.805 "qid": 0, 00:18:24.805 "state": "enabled", 00:18:24.805 "thread": "nvmf_tgt_poll_group_000", 00:18:24.805 "listen_address": { 00:18:24.805 "trtype": "TCP", 00:18:24.805 "adrfam": "IPv4", 00:18:24.805 "traddr": "10.0.0.2", 00:18:24.805 "trsvcid": "4420" 00:18:24.805 }, 00:18:24.805 "peer_address": { 00:18:24.805 "trtype": "TCP", 00:18:24.805 "adrfam": "IPv4", 00:18:24.805 "traddr": "10.0.0.1", 00:18:24.805 "trsvcid": "49780" 00:18:24.805 }, 00:18:24.805 "auth": { 00:18:24.805 "state": "completed", 00:18:24.805 "digest": "sha384", 00:18:24.805 "dhgroup": "ffdhe2048" 00:18:24.805 } 00:18:24.805 } 00:18:24.805 ]' 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.805 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.063 10:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.063 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.063 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.063 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.629 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.886 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:25.886 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.886 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.887 
10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.887 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.144 00:18:26.144 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.144 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.144 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.403 { 00:18:26.403 "cntlid": 63, 00:18:26.403 "qid": 0, 00:18:26.403 "state": "enabled", 00:18:26.403 "thread": "nvmf_tgt_poll_group_000", 00:18:26.403 "listen_address": { 00:18:26.403 "trtype": "TCP", 00:18:26.403 "adrfam": "IPv4", 00:18:26.403 "traddr": "10.0.0.2", 00:18:26.403 "trsvcid": "4420" 00:18:26.403 }, 00:18:26.403 "peer_address": { 00:18:26.403 "trtype": "TCP", 00:18:26.403 "adrfam": "IPv4", 00:18:26.403 "traddr": "10.0.0.1", 00:18:26.403 "trsvcid": "49810" 00:18:26.403 }, 00:18:26.403 "auth": { 00:18:26.403 "state": "completed", 00:18:26.403 "digest": "sha384", 00:18:26.403 "dhgroup": "ffdhe2048" 00:18:26.403 } 00:18:26.403 } 00:18:26.403 ]' 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.403 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.403 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.403 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.403 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:26.662 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.228 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.486 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.486 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.486 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.486 10:33:30 
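With the null and ffdhe2048 passes finished, the outer dhgroup loop (target/auth.sh@92) moves on to ffdhe3072: each pass re-applies the host option so only the group under test can be negotiated, then replays the same per-key cycle. A sketch of that outer step, limited to the groups that appear in this part of the trace (the script's dhgroups array may list more):
for dhgroup in null ffdhe2048 ffdhe3072; do
    # host side: pin the DH group for this pass
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # ... per-key add_host / attach / verify / nvme connect / teardown cycle as above ...
done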
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.486 00:18:27.486 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.486 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.486 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.745 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.745 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.745 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.745 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.745 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.745 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.745 { 00:18:27.745 "cntlid": 65, 00:18:27.745 "qid": 0, 00:18:27.745 "state": "enabled", 00:18:27.745 "thread": "nvmf_tgt_poll_group_000", 00:18:27.745 "listen_address": { 00:18:27.745 "trtype": "TCP", 00:18:27.745 "adrfam": "IPv4", 00:18:27.745 "traddr": "10.0.0.2", 00:18:27.745 "trsvcid": "4420" 00:18:27.745 }, 00:18:27.745 "peer_address": { 00:18:27.746 "trtype": "TCP", 00:18:27.746 "adrfam": "IPv4", 00:18:27.746 "traddr": "10.0.0.1", 00:18:27.746 "trsvcid": "49836" 00:18:27.746 }, 00:18:27.746 "auth": { 00:18:27.746 "state": "completed", 00:18:27.746 "digest": "sha384", 00:18:27.746 "dhgroup": "ffdhe3072" 00:18:27.746 } 00:18:27.746 } 00:18:27.746 ]' 00:18:27.746 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.746 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.746 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.004 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.004 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.004 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.004 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.004 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.004 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 
006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.572 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.831 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.089 00:18:29.089 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.089 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.089 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.348 { 00:18:29.348 "cntlid": 67, 00:18:29.348 "qid": 0, 00:18:29.348 "state": "enabled", 00:18:29.348 "thread": "nvmf_tgt_poll_group_000", 00:18:29.348 "listen_address": { 00:18:29.348 "trtype": "TCP", 00:18:29.348 "adrfam": "IPv4", 00:18:29.348 "traddr": "10.0.0.2", 00:18:29.348 "trsvcid": "4420" 00:18:29.348 }, 00:18:29.348 "peer_address": { 00:18:29.348 "trtype": "TCP", 00:18:29.348 "adrfam": "IPv4", 00:18:29.348 "traddr": "10.0.0.1", 00:18:29.348 "trsvcid": "49874" 00:18:29.348 }, 00:18:29.348 "auth": { 00:18:29.348 "state": "completed", 00:18:29.348 "digest": "sha384", 00:18:29.348 "dhgroup": "ffdhe3072" 00:18:29.348 } 00:18:29.348 } 00:18:29.348 ]' 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.348 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.607 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:30.173 10:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.173 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.432 00:18:30.432 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.432 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.432 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.691 { 00:18:30.691 "cntlid": 69, 00:18:30.691 "qid": 0, 00:18:30.691 "state": "enabled", 00:18:30.691 "thread": "nvmf_tgt_poll_group_000", 00:18:30.691 "listen_address": { 00:18:30.691 "trtype": "TCP", 00:18:30.691 "adrfam": "IPv4", 00:18:30.691 "traddr": "10.0.0.2", 00:18:30.691 "trsvcid": "4420" 00:18:30.691 }, 00:18:30.691 "peer_address": { 00:18:30.691 "trtype": "TCP", 00:18:30.691 "adrfam": "IPv4", 00:18:30.691 "traddr": "10.0.0.1", 00:18:30.691 "trsvcid": "49900" 00:18:30.691 }, 00:18:30.691 "auth": { 00:18:30.691 "state": "completed", 00:18:30.691 "digest": "sha384", 00:18:30.691 "dhgroup": "ffdhe3072" 00:18:30.691 } 00:18:30.691 } 00:18:30.691 ]' 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.691 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.949 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.949 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.949 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.949 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.516 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.777 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.778 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.778 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.036 00:18:32.036 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.036 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.036 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.036 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.036 10:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.036 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.036 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.295 { 00:18:32.295 "cntlid": 71, 00:18:32.295 "qid": 0, 00:18:32.295 "state": "enabled", 00:18:32.295 "thread": "nvmf_tgt_poll_group_000", 00:18:32.295 "listen_address": { 00:18:32.295 "trtype": "TCP", 00:18:32.295 "adrfam": "IPv4", 00:18:32.295 "traddr": "10.0.0.2", 00:18:32.295 "trsvcid": "4420" 00:18:32.295 }, 00:18:32.295 "peer_address": { 00:18:32.295 "trtype": "TCP", 00:18:32.295 "adrfam": "IPv4", 00:18:32.295 "traddr": "10.0.0.1", 00:18:32.295 "trsvcid": "49928" 00:18:32.295 }, 00:18:32.295 "auth": { 00:18:32.295 "state": "completed", 00:18:32.295 "digest": "sha384", 00:18:32.295 "dhgroup": "ffdhe3072" 00:18:32.295 } 00:18:32.295 } 00:18:32.295 ]' 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.295 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.553 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.129 10:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.129 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.389 00:18:33.389 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.389 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.389 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.647 10:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.647 { 00:18:33.647 "cntlid": 73, 00:18:33.647 "qid": 0, 00:18:33.647 "state": "enabled", 00:18:33.647 "thread": "nvmf_tgt_poll_group_000", 00:18:33.647 "listen_address": { 00:18:33.647 "trtype": "TCP", 00:18:33.647 "adrfam": "IPv4", 00:18:33.647 "traddr": "10.0.0.2", 00:18:33.647 "trsvcid": "4420" 00:18:33.647 }, 00:18:33.647 "peer_address": { 00:18:33.647 "trtype": "TCP", 00:18:33.647 "adrfam": "IPv4", 00:18:33.647 "traddr": "10.0.0.1", 00:18:33.647 "trsvcid": "57480" 00:18:33.647 }, 00:18:33.647 "auth": { 00:18:33.647 "state": "completed", 00:18:33.647 "digest": "sha384", 00:18:33.647 "dhgroup": "ffdhe4096" 00:18:33.647 } 00:18:33.647 } 00:18:33.647 ]' 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.647 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.906 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.473 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.990 00:18:34.990 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.990 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.990 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:18:35.249 { 00:18:35.249 "cntlid": 75, 00:18:35.249 "qid": 0, 00:18:35.249 "state": "enabled", 00:18:35.249 "thread": "nvmf_tgt_poll_group_000", 00:18:35.249 "listen_address": { 00:18:35.249 "trtype": "TCP", 00:18:35.249 "adrfam": "IPv4", 00:18:35.249 "traddr": "10.0.0.2", 00:18:35.249 "trsvcid": "4420" 00:18:35.249 }, 00:18:35.249 "peer_address": { 00:18:35.249 "trtype": "TCP", 00:18:35.249 "adrfam": "IPv4", 00:18:35.249 "traddr": "10.0.0.1", 00:18:35.249 "trsvcid": "57504" 00:18:35.249 }, 00:18:35.249 "auth": { 00:18:35.249 "state": "completed", 00:18:35.249 "digest": "sha384", 00:18:35.249 "dhgroup": "ffdhe4096" 00:18:35.249 } 00:18:35.249 } 00:18:35.249 ]' 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.249 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.508 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.075 
10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.075 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.333 00:18:36.333 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.333 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.333 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.592 { 00:18:36.592 "cntlid": 77, 00:18:36.592 "qid": 0, 00:18:36.592 "state": "enabled", 00:18:36.592 "thread": "nvmf_tgt_poll_group_000", 00:18:36.592 "listen_address": { 00:18:36.592 "trtype": "TCP", 00:18:36.592 "adrfam": "IPv4", 00:18:36.592 "traddr": "10.0.0.2", 00:18:36.592 "trsvcid": "4420" 00:18:36.592 }, 00:18:36.592 "peer_address": { 
00:18:36.592 "trtype": "TCP", 00:18:36.592 "adrfam": "IPv4", 00:18:36.592 "traddr": "10.0.0.1", 00:18:36.592 "trsvcid": "57530" 00:18:36.592 }, 00:18:36.592 "auth": { 00:18:36.592 "state": "completed", 00:18:36.592 "digest": "sha384", 00:18:36.592 "dhgroup": "ffdhe4096" 00:18:36.592 } 00:18:36.592 } 00:18:36.592 ]' 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.592 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.850 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.850 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.850 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.850 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.850 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.850 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.417 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:37.675 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.676 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.676 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.676 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.676 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.934 00:18:37.934 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.934 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.934 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.194 { 00:18:38.194 "cntlid": 79, 00:18:38.194 "qid": 0, 00:18:38.194 "state": "enabled", 00:18:38.194 "thread": "nvmf_tgt_poll_group_000", 00:18:38.194 "listen_address": { 00:18:38.194 "trtype": "TCP", 00:18:38.194 "adrfam": "IPv4", 00:18:38.194 "traddr": "10.0.0.2", 00:18:38.194 "trsvcid": "4420" 00:18:38.194 }, 00:18:38.194 "peer_address": { 00:18:38.194 "trtype": "TCP", 00:18:38.194 "adrfam": "IPv4", 00:18:38.194 "traddr": "10.0.0.1", 00:18:38.194 "trsvcid": "57554" 00:18:38.194 }, 00:18:38.194 "auth": { 00:18:38.194 "state": "completed", 00:18:38.194 "digest": "sha384", 00:18:38.194 "dhgroup": "ffdhe4096" 00:18:38.194 } 00:18:38.194 } 00:18:38.194 ]' 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.194 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.452 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.019 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.587 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.587 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.587 { 00:18:39.587 "cntlid": 81, 00:18:39.587 "qid": 0, 00:18:39.587 "state": "enabled", 00:18:39.587 "thread": "nvmf_tgt_poll_group_000", 00:18:39.587 "listen_address": { 00:18:39.587 "trtype": "TCP", 00:18:39.587 "adrfam": "IPv4", 00:18:39.587 "traddr": "10.0.0.2", 00:18:39.587 "trsvcid": "4420" 00:18:39.588 }, 00:18:39.588 "peer_address": { 00:18:39.588 "trtype": "TCP", 00:18:39.588 "adrfam": "IPv4", 00:18:39.588 "traddr": "10.0.0.1", 00:18:39.588 "trsvcid": "57598" 00:18:39.588 }, 00:18:39.588 "auth": { 00:18:39.588 "state": "completed", 00:18:39.588 "digest": "sha384", 00:18:39.588 "dhgroup": "ffdhe6144" 00:18:39.588 } 00:18:39.588 } 00:18:39.588 ]' 00:18:39.588 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.588 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.588 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.855 10:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.855 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.855 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.855 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.855 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.855 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.425 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.683 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.684 10:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.684 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.942 00:18:40.942 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.942 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.942 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.201 { 00:18:41.201 "cntlid": 83, 00:18:41.201 "qid": 0, 00:18:41.201 "state": "enabled", 00:18:41.201 "thread": "nvmf_tgt_poll_group_000", 00:18:41.201 "listen_address": { 00:18:41.201 "trtype": "TCP", 00:18:41.201 "adrfam": "IPv4", 00:18:41.201 "traddr": "10.0.0.2", 00:18:41.201 "trsvcid": "4420" 00:18:41.201 }, 00:18:41.201 "peer_address": { 00:18:41.201 "trtype": "TCP", 00:18:41.201 "adrfam": "IPv4", 00:18:41.201 "traddr": "10.0.0.1", 00:18:41.201 "trsvcid": "57620" 00:18:41.201 }, 00:18:41.201 "auth": { 00:18:41.201 "state": "completed", 00:18:41.201 "digest": "sha384", 00:18:41.201 "dhgroup": "ffdhe6144" 00:18:41.201 } 00:18:41.201 } 00:18:41.201 ]' 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.201 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.459 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.026 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.285 10:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.285 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.544 00:18:42.544 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.544 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.544 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.803 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.803 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.803 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.803 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.803 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.803 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.803 { 00:18:42.803 "cntlid": 85, 00:18:42.803 "qid": 0, 00:18:42.803 "state": "enabled", 00:18:42.803 "thread": "nvmf_tgt_poll_group_000", 00:18:42.803 "listen_address": { 00:18:42.803 "trtype": "TCP", 00:18:42.803 "adrfam": "IPv4", 00:18:42.803 "traddr": "10.0.0.2", 00:18:42.803 "trsvcid": "4420" 00:18:42.803 }, 00:18:42.804 "peer_address": { 00:18:42.804 "trtype": "TCP", 00:18:42.804 "adrfam": "IPv4", 00:18:42.804 "traddr": "10.0.0.1", 00:18:42.804 "trsvcid": "59840" 00:18:42.804 }, 00:18:42.804 "auth": { 00:18:42.804 "state": "completed", 00:18:42.804 "digest": "sha384", 00:18:42.804 "dhgroup": "ffdhe6144" 00:18:42.804 } 00:18:42.804 } 00:18:42.804 ]' 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.804 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.062 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.652 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.944 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.945 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.945 10:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.945 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.203 { 00:18:44.203 "cntlid": 87, 00:18:44.203 "qid": 0, 00:18:44.203 "state": "enabled", 00:18:44.203 "thread": "nvmf_tgt_poll_group_000", 00:18:44.203 "listen_address": { 00:18:44.203 "trtype": "TCP", 00:18:44.203 "adrfam": "IPv4", 00:18:44.203 "traddr": "10.0.0.2", 00:18:44.203 "trsvcid": "4420" 00:18:44.203 }, 00:18:44.203 "peer_address": { 00:18:44.203 "trtype": "TCP", 00:18:44.203 "adrfam": "IPv4", 00:18:44.203 "traddr": "10.0.0.1", 00:18:44.203 "trsvcid": "59860" 00:18:44.203 }, 00:18:44.203 "auth": { 00:18:44.203 "state": "completed", 00:18:44.203 "digest": "sha384", 00:18:44.203 "dhgroup": "ffdhe6144" 00:18:44.203 } 00:18:44.203 } 00:18:44.203 ]' 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.203 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.462 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.462 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.462 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.462 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.462 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.463 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.029 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.288 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.855 00:18:45.855 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.855 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.855 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.855 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.855 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.855 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.855 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.114 { 00:18:46.114 "cntlid": 89, 00:18:46.114 "qid": 0, 00:18:46.114 "state": "enabled", 00:18:46.114 "thread": "nvmf_tgt_poll_group_000", 00:18:46.114 "listen_address": { 00:18:46.114 "trtype": "TCP", 00:18:46.114 "adrfam": "IPv4", 00:18:46.114 "traddr": "10.0.0.2", 00:18:46.114 "trsvcid": "4420" 00:18:46.114 }, 00:18:46.114 "peer_address": { 00:18:46.114 "trtype": "TCP", 00:18:46.114 "adrfam": "IPv4", 00:18:46.114 "traddr": "10.0.0.1", 00:18:46.114 "trsvcid": "59874" 00:18:46.114 }, 00:18:46.114 "auth": { 00:18:46.114 "state": "completed", 00:18:46.114 "digest": "sha384", 00:18:46.114 "dhgroup": "ffdhe8192" 00:18:46.114 } 00:18:46.114 } 00:18:46.114 ]' 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.114 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.373 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:46.941 10:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.941 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.508 00:18:47.508 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.508 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.508 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.767 { 00:18:47.767 "cntlid": 91, 00:18:47.767 "qid": 0, 00:18:47.767 "state": "enabled", 00:18:47.767 "thread": "nvmf_tgt_poll_group_000", 00:18:47.767 "listen_address": { 00:18:47.767 "trtype": "TCP", 00:18:47.767 "adrfam": "IPv4", 00:18:47.767 "traddr": "10.0.0.2", 00:18:47.767 "trsvcid": "4420" 00:18:47.767 }, 00:18:47.767 "peer_address": { 00:18:47.767 "trtype": "TCP", 00:18:47.767 "adrfam": "IPv4", 00:18:47.767 "traddr": "10.0.0.1", 00:18:47.767 "trsvcid": "59892" 00:18:47.767 }, 00:18:47.767 "auth": { 00:18:47.767 "state": "completed", 00:18:47.767 "digest": "sha384", 00:18:47.767 "dhgroup": "ffdhe8192" 00:18:47.767 } 00:18:47.767 } 00:18:47.767 ]' 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.767 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.026 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.593 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.852 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.111 00:18:49.111 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.111 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.111 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.369 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:18:49.369 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.369 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.369 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.370 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.370 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.370 { 00:18:49.370 "cntlid": 93, 00:18:49.370 "qid": 0, 00:18:49.370 "state": "enabled", 00:18:49.370 "thread": "nvmf_tgt_poll_group_000", 00:18:49.370 "listen_address": { 00:18:49.370 "trtype": "TCP", 00:18:49.370 "adrfam": "IPv4", 00:18:49.370 "traddr": "10.0.0.2", 00:18:49.370 "trsvcid": "4420" 00:18:49.370 }, 00:18:49.370 "peer_address": { 00:18:49.370 "trtype": "TCP", 00:18:49.370 "adrfam": "IPv4", 00:18:49.370 "traddr": "10.0.0.1", 00:18:49.370 "trsvcid": "59926" 00:18:49.370 }, 00:18:49.370 "auth": { 00:18:49.370 "state": "completed", 00:18:49.370 "digest": "sha384", 00:18:49.370 "dhgroup": "ffdhe8192" 00:18:49.370 } 00:18:49.370 } 00:18:49.370 ]' 00:18:49.370 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.370 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.370 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.370 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.370 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.628 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.628 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.628 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.628 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:50.194 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.194 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:50.194 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.194 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.194 10:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.194 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.194 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.194 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.453 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.020 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.020 { 00:18:51.020 "cntlid": 95, 00:18:51.020 "qid": 0, 00:18:51.020 "state": "enabled", 00:18:51.020 "thread": "nvmf_tgt_poll_group_000", 00:18:51.020 "listen_address": { 00:18:51.020 "trtype": "TCP", 00:18:51.020 "adrfam": "IPv4", 00:18:51.020 "traddr": "10.0.0.2", 00:18:51.020 "trsvcid": "4420" 00:18:51.020 }, 00:18:51.020 "peer_address": { 00:18:51.020 "trtype": "TCP", 00:18:51.020 "adrfam": "IPv4", 00:18:51.020 "traddr": "10.0.0.1", 00:18:51.020 "trsvcid": "59962" 00:18:51.020 }, 00:18:51.020 "auth": { 00:18:51.020 "state": "completed", 00:18:51.020 "digest": "sha384", 00:18:51.020 "dhgroup": "ffdhe8192" 00:18:51.020 } 00:18:51.020 } 00:18:51.020 ]' 00:18:51.020 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.278 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.278 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.278 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.278 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.278 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.279 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.279 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.537 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.104 10:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.104 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.363 00:18:52.363 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.363 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.363 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.622 10:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.622 { 00:18:52.622 "cntlid": 97, 00:18:52.622 "qid": 0, 00:18:52.622 "state": "enabled", 00:18:52.622 "thread": "nvmf_tgt_poll_group_000", 00:18:52.622 "listen_address": { 00:18:52.622 "trtype": "TCP", 00:18:52.622 "adrfam": "IPv4", 00:18:52.622 "traddr": "10.0.0.2", 00:18:52.622 "trsvcid": "4420" 00:18:52.622 }, 00:18:52.622 "peer_address": { 00:18:52.622 "trtype": "TCP", 00:18:52.622 "adrfam": "IPv4", 00:18:52.622 "traddr": "10.0.0.1", 00:18:52.622 "trsvcid": "52474" 00:18:52.622 }, 00:18:52.622 "auth": { 00:18:52.622 "state": "completed", 00:18:52.622 "digest": "sha512", 00:18:52.622 "dhgroup": "null" 00:18:52.622 } 00:18:52.622 } 00:18:52.622 ]' 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.622 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.880 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:53.446 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.446 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:53.446 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.446 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.446 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.446 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.446 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.447 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.704 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:53.704 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.704 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.704 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.705 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.963 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.963 { 00:18:53.963 "cntlid": 99, 00:18:53.963 "qid": 0, 00:18:53.963 "state": "enabled", 00:18:53.963 "thread": "nvmf_tgt_poll_group_000", 00:18:53.963 "listen_address": { 00:18:53.963 "trtype": "TCP", 00:18:53.963 "adrfam": "IPv4", 00:18:53.963 
"traddr": "10.0.0.2", 00:18:53.963 "trsvcid": "4420" 00:18:53.963 }, 00:18:53.963 "peer_address": { 00:18:53.963 "trtype": "TCP", 00:18:53.963 "adrfam": "IPv4", 00:18:53.963 "traddr": "10.0.0.1", 00:18:53.963 "trsvcid": "52514" 00:18:53.963 }, 00:18:53.963 "auth": { 00:18:53.963 "state": "completed", 00:18:53.963 "digest": "sha512", 00:18:53.963 "dhgroup": "null" 00:18:53.963 } 00:18:53.963 } 00:18:53.963 ]' 00:18:53.963 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.221 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.221 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.221 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.221 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.221 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.221 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.221 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.479 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.046 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.046 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.303 00:18:55.303 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.303 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.303 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.562 { 00:18:55.562 "cntlid": 101, 00:18:55.562 "qid": 0, 00:18:55.562 "state": "enabled", 00:18:55.562 "thread": "nvmf_tgt_poll_group_000", 00:18:55.562 "listen_address": { 00:18:55.562 "trtype": "TCP", 00:18:55.562 "adrfam": "IPv4", 00:18:55.562 "traddr": "10.0.0.2", 00:18:55.562 "trsvcid": "4420" 00:18:55.562 }, 00:18:55.562 "peer_address": { 00:18:55.562 "trtype": "TCP", 00:18:55.562 "adrfam": "IPv4", 00:18:55.562 "traddr": "10.0.0.1", 00:18:55.562 "trsvcid": "52530" 00:18:55.562 }, 00:18:55.562 "auth": { 00:18:55.562 "state": "completed", 00:18:55.562 "digest": "sha512", 00:18:55.562 "dhgroup": "null" 
00:18:55.562 } 00:18:55.562 } 00:18:55.562 ]' 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.562 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.826 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:18:56.393 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.393 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:56.393 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.393 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.393 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.394 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.394 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.394 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.652 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.910 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.910 { 00:18:56.910 "cntlid": 103, 00:18:56.910 "qid": 0, 00:18:56.910 "state": "enabled", 00:18:56.910 "thread": "nvmf_tgt_poll_group_000", 00:18:56.910 "listen_address": { 00:18:56.910 "trtype": "TCP", 00:18:56.910 "adrfam": "IPv4", 00:18:56.910 "traddr": "10.0.0.2", 00:18:56.910 "trsvcid": "4420" 00:18:56.910 }, 00:18:56.910 "peer_address": { 00:18:56.910 "trtype": "TCP", 00:18:56.910 "adrfam": "IPv4", 00:18:56.910 "traddr": "10.0.0.1", 00:18:56.910 "trsvcid": "52554" 00:18:56.910 }, 00:18:56.910 "auth": { 00:18:56.910 "state": "completed", 00:18:56.910 "digest": "sha512", 00:18:56.910 "dhgroup": "null" 00:18:56.910 } 00:18:56.910 } 00:18:56.910 ]' 00:18:56.910 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.168 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.168 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.168 10:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.168 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.168 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.168 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.168 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.426 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.992 10:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.992 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.250 00:18:58.250 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.250 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.250 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.508 { 00:18:58.508 "cntlid": 105, 00:18:58.508 "qid": 0, 00:18:58.508 "state": "enabled", 00:18:58.508 "thread": "nvmf_tgt_poll_group_000", 00:18:58.508 "listen_address": { 00:18:58.508 "trtype": "TCP", 00:18:58.508 "adrfam": "IPv4", 00:18:58.508 "traddr": "10.0.0.2", 00:18:58.508 "trsvcid": "4420" 00:18:58.508 }, 00:18:58.508 "peer_address": { 00:18:58.508 "trtype": "TCP", 00:18:58.508 "adrfam": "IPv4", 00:18:58.508 "traddr": "10.0.0.1", 00:18:58.508 "trsvcid": "52584" 00:18:58.508 }, 00:18:58.508 "auth": { 00:18:58.508 "state": "completed", 00:18:58.508 "digest": "sha512", 00:18:58.508 "dhgroup": "ffdhe2048" 00:18:58.508 } 00:18:58.508 } 00:18:58.508 ]' 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.508 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.766 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:18:59.332 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.333 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:59.333 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.333 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.333 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.333 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.333 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.333 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.591 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.591 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.867 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.867 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.867 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.867 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.868 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.868 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.868 { 00:18:59.868 "cntlid": 107, 00:18:59.868 "qid": 0, 00:18:59.868 "state": "enabled", 00:18:59.868 "thread": "nvmf_tgt_poll_group_000", 00:18:59.868 "listen_address": { 00:18:59.868 "trtype": "TCP", 00:18:59.868 "adrfam": "IPv4", 00:18:59.868 "traddr": "10.0.0.2", 00:18:59.868 "trsvcid": "4420" 00:18:59.868 }, 00:18:59.868 "peer_address": { 00:18:59.868 "trtype": "TCP", 00:18:59.868 "adrfam": "IPv4", 00:18:59.868 "traddr": "10.0.0.1", 00:18:59.868 "trsvcid": "52610" 00:18:59.868 }, 00:18:59.868 "auth": { 00:18:59.868 "state": "completed", 00:18:59.868 "digest": "sha512", 00:18:59.868 "dhgroup": "ffdhe2048" 00:18:59.868 } 00:18:59.868 } 00:18:59.868 ]' 00:18:59.868 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.868 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.868 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.868 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.868 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.127 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.127 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.127 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.127 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.692 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.949 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
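The surrounding trace repeats one DH-HMAC-CHAP round per digest/dhgroup/key combination; at this point it is exercising sha512 with ffdhe2048 and key2. For orientation, a minimal standalone sketch of one such round follows, assembled only from the RPCs and nvme-cli flags that appear in this log. The socket path, target address 10.0.0.2:4420, NQNs and key names are copied from the trace; KEY2_SECRET and CKEY2_SECRET are placeholders for the DHHC-1 secrets, the target-side calls are assumed to go to rpc.py's default socket, and the ordering is an illustration of what target/auth.sh does per key rather than a verbatim excerpt of the script.

#!/usr/bin/env bash
# One DH-HMAC-CHAP authentication round, sketched from the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock     # host (initiator) SPDK app; target-side calls use rpc.py's default socket (assumption)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# 1. Pin the host-side initiator to one digest/dhgroup combination.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the target subsystem with a DH-CHAP key (and optional controller key).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the host side; the attach only succeeds if authentication completes.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Verify the negotiated parameters on the target's queue pair.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect "completed"
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'   # expect "sha512"
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'  # expect "ffdhe2048"

# 5. Detach, then repeat the check through the kernel initiator with nvme-cli
#    (KEY2_SECRET/CKEY2_SECRET stand for the DHHC-1:... strings shown in the log).
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 006f0d1b-21c0-e711-906e-00163566263e \
    --dhchap-secret "$KEY2_SECRET" --dhchap-ctrl-secret "$CKEY2_SECRET"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The same sequence is what the following entries trace for ffdhe2048/key2, and it is rerun unchanged for keys 0-3 and for the ffdhe3072 and ffdhe4096 groups later in this log.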
00:19:00.950 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.208 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.208 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.466 { 00:19:01.466 "cntlid": 109, 00:19:01.466 "qid": 0, 00:19:01.466 "state": "enabled", 00:19:01.466 "thread": "nvmf_tgt_poll_group_000", 00:19:01.466 "listen_address": { 00:19:01.466 "trtype": "TCP", 00:19:01.466 "adrfam": "IPv4", 00:19:01.466 "traddr": "10.0.0.2", 00:19:01.466 "trsvcid": "4420" 00:19:01.466 }, 00:19:01.466 "peer_address": { 00:19:01.466 "trtype": "TCP", 00:19:01.466 "adrfam": "IPv4", 00:19:01.466 "traddr": "10.0.0.1", 00:19:01.466 "trsvcid": "52652" 00:19:01.466 }, 00:19:01.466 "auth": { 00:19:01.466 "state": "completed", 00:19:01.466 "digest": "sha512", 00:19:01.466 "dhgroup": "ffdhe2048" 00:19:01.466 } 00:19:01.466 } 00:19:01.466 ]' 00:19:01.466 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.466 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.466 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.466 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:01.466 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.466 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.466 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.466 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.725 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.290 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:02.291 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.291 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.291 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.291 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.291 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.548 00:19:02.548 10:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.548 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.548 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.813 { 00:19:02.813 "cntlid": 111, 00:19:02.813 "qid": 0, 00:19:02.813 "state": "enabled", 00:19:02.813 "thread": "nvmf_tgt_poll_group_000", 00:19:02.813 "listen_address": { 00:19:02.813 "trtype": "TCP", 00:19:02.813 "adrfam": "IPv4", 00:19:02.813 "traddr": "10.0.0.2", 00:19:02.813 "trsvcid": "4420" 00:19:02.813 }, 00:19:02.813 "peer_address": { 00:19:02.813 "trtype": "TCP", 00:19:02.813 "adrfam": "IPv4", 00:19:02.813 "traddr": "10.0.0.1", 00:19:02.813 "trsvcid": "55574" 00:19:02.813 }, 00:19:02.813 "auth": { 00:19:02.813 "state": "completed", 00:19:02.813 "digest": "sha512", 00:19:02.813 "dhgroup": "ffdhe2048" 00:19:02.813 } 00:19:02.813 } 00:19:02.813 ]' 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.813 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.071 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.638 10:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.638 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.896 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.154 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.154 10:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.154 { 00:19:04.154 "cntlid": 113, 00:19:04.154 "qid": 0, 00:19:04.154 "state": "enabled", 00:19:04.154 "thread": "nvmf_tgt_poll_group_000", 00:19:04.154 "listen_address": { 00:19:04.154 "trtype": "TCP", 00:19:04.154 "adrfam": "IPv4", 00:19:04.154 "traddr": "10.0.0.2", 00:19:04.154 "trsvcid": "4420" 00:19:04.154 }, 00:19:04.154 "peer_address": { 00:19:04.154 "trtype": "TCP", 00:19:04.154 "adrfam": "IPv4", 00:19:04.154 "traddr": "10.0.0.1", 00:19:04.154 "trsvcid": "55600" 00:19:04.154 }, 00:19:04.154 "auth": { 00:19:04.154 "state": "completed", 00:19:04.154 "digest": "sha512", 00:19:04.154 "dhgroup": "ffdhe3072" 00:19:04.154 } 00:19:04.154 } 00:19:04.154 ]' 00:19:04.154 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.412 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.412 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.412 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.412 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.412 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.412 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.412 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.412 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:04.978 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.237 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.495 00:19:05.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.755 { 00:19:05.755 "cntlid": 115, 00:19:05.755 "qid": 0, 00:19:05.755 "state": "enabled", 00:19:05.755 "thread": "nvmf_tgt_poll_group_000", 00:19:05.755 "listen_address": { 00:19:05.755 "trtype": "TCP", 00:19:05.755 "adrfam": "IPv4", 00:19:05.755 "traddr": "10.0.0.2", 00:19:05.755 "trsvcid": "4420" 00:19:05.755 }, 00:19:05.755 "peer_address": { 00:19:05.755 "trtype": "TCP", 00:19:05.755 "adrfam": "IPv4", 00:19:05.755 "traddr": "10.0.0.1", 00:19:05.755 "trsvcid": "55628" 00:19:05.755 }, 00:19:05.755 "auth": { 00:19:05.755 "state": "completed", 00:19:05.755 "digest": "sha512", 00:19:05.755 "dhgroup": "ffdhe3072" 00:19:05.755 } 00:19:05.755 } 00:19:05.755 ]' 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:19:06.579 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.579 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:06.579 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.579 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.579 10:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.579 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.579 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.579 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.837 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.838 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.838 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.096 10:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.096 { 00:19:07.096 "cntlid": 117, 00:19:07.096 "qid": 0, 00:19:07.096 "state": "enabled", 00:19:07.096 "thread": "nvmf_tgt_poll_group_000", 00:19:07.096 "listen_address": { 00:19:07.096 "trtype": "TCP", 00:19:07.096 "adrfam": "IPv4", 00:19:07.096 "traddr": "10.0.0.2", 00:19:07.096 "trsvcid": "4420" 00:19:07.096 }, 00:19:07.096 "peer_address": { 00:19:07.096 "trtype": "TCP", 00:19:07.096 "adrfam": "IPv4", 00:19:07.096 "traddr": "10.0.0.1", 00:19:07.096 "trsvcid": "55664" 00:19:07.096 }, 00:19:07.096 "auth": { 00:19:07.096 "state": "completed", 00:19:07.096 "digest": "sha512", 00:19:07.096 "dhgroup": "ffdhe3072" 00:19:07.096 } 00:19:07.096 } 00:19:07.096 ]' 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.096 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.355 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.355 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.355 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.355 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.355 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.355 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:07.925 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.183 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.442 00:19:08.442 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.442 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.442 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.700 { 00:19:08.700 "cntlid": 119, 00:19:08.700 "qid": 0, 00:19:08.700 "state": "enabled", 00:19:08.700 "thread": 
"nvmf_tgt_poll_group_000", 00:19:08.700 "listen_address": { 00:19:08.700 "trtype": "TCP", 00:19:08.700 "adrfam": "IPv4", 00:19:08.700 "traddr": "10.0.0.2", 00:19:08.700 "trsvcid": "4420" 00:19:08.700 }, 00:19:08.700 "peer_address": { 00:19:08.700 "trtype": "TCP", 00:19:08.700 "adrfam": "IPv4", 00:19:08.700 "traddr": "10.0.0.1", 00:19:08.700 "trsvcid": "55690" 00:19:08.700 }, 00:19:08.700 "auth": { 00:19:08.700 "state": "completed", 00:19:08.700 "digest": "sha512", 00:19:08.700 "dhgroup": "ffdhe3072" 00:19:08.700 } 00:19:08.700 } 00:19:08.700 ]' 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.700 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.959 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.526 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.784 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.044 { 00:19:10.044 "cntlid": 121, 00:19:10.044 "qid": 0, 00:19:10.044 "state": "enabled", 00:19:10.044 "thread": "nvmf_tgt_poll_group_000", 00:19:10.044 "listen_address": { 00:19:10.044 "trtype": "TCP", 00:19:10.044 "adrfam": "IPv4", 00:19:10.044 "traddr": "10.0.0.2", 00:19:10.044 "trsvcid": "4420" 00:19:10.044 }, 00:19:10.044 "peer_address": { 00:19:10.044 "trtype": "TCP", 00:19:10.044 "adrfam": 
"IPv4", 00:19:10.044 "traddr": "10.0.0.1", 00:19:10.044 "trsvcid": "55718" 00:19:10.044 }, 00:19:10.044 "auth": { 00:19:10.044 "state": "completed", 00:19:10.044 "digest": "sha512", 00:19:10.044 "dhgroup": "ffdhe4096" 00:19:10.044 } 00:19:10.044 } 00:19:10.044 ]' 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.044 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.301 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.301 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.301 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.301 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.301 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.301 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.867 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.186 
10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.186 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.444 00:19:11.444 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.444 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.444 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.703 { 00:19:11.703 "cntlid": 123, 00:19:11.703 "qid": 0, 00:19:11.703 "state": "enabled", 00:19:11.703 "thread": "nvmf_tgt_poll_group_000", 00:19:11.703 "listen_address": { 00:19:11.703 "trtype": "TCP", 00:19:11.703 "adrfam": "IPv4", 00:19:11.703 "traddr": "10.0.0.2", 00:19:11.703 "trsvcid": "4420" 00:19:11.703 }, 00:19:11.703 "peer_address": { 00:19:11.703 "trtype": "TCP", 00:19:11.703 "adrfam": "IPv4", 00:19:11.703 "traddr": "10.0.0.1", 00:19:11.703 "trsvcid": "55756" 00:19:11.703 }, 00:19:11.703 "auth": { 00:19:11.703 "state": "completed", 00:19:11.703 "digest": "sha512", 00:19:11.703 "dhgroup": "ffdhe4096" 00:19:11.703 } 00:19:11.703 } 00:19:11.703 ]' 00:19:11.703 10:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.703 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.962 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:19:12.527 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.527 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.528 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.786 00:19:12.786 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.786 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.786 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.043 { 00:19:13.043 "cntlid": 125, 00:19:13.043 "qid": 0, 00:19:13.043 "state": "enabled", 00:19:13.043 "thread": "nvmf_tgt_poll_group_000", 00:19:13.043 "listen_address": { 00:19:13.043 "trtype": "TCP", 00:19:13.043 "adrfam": "IPv4", 00:19:13.043 "traddr": "10.0.0.2", 00:19:13.043 "trsvcid": "4420" 00:19:13.043 }, 00:19:13.043 "peer_address": { 00:19:13.043 "trtype": "TCP", 00:19:13.043 "adrfam": "IPv4", 00:19:13.043 "traddr": "10.0.0.1", 00:19:13.043 "trsvcid": "36650" 00:19:13.043 }, 00:19:13.043 "auth": { 00:19:13.043 "state": "completed", 00:19:13.043 "digest": "sha512", 00:19:13.043 "dhgroup": "ffdhe4096" 00:19:13.043 } 00:19:13.043 } 00:19:13.043 ]' 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.043 
10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.043 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.301 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.301 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.301 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.301 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.867 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.126 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.383 00:19:14.383 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.383 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.383 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.640 { 00:19:14.640 "cntlid": 127, 00:19:14.640 "qid": 0, 00:19:14.640 "state": "enabled", 00:19:14.640 "thread": "nvmf_tgt_poll_group_000", 00:19:14.640 "listen_address": { 00:19:14.640 "trtype": "TCP", 00:19:14.640 "adrfam": "IPv4", 00:19:14.640 "traddr": "10.0.0.2", 00:19:14.640 "trsvcid": "4420" 00:19:14.640 }, 00:19:14.640 "peer_address": { 00:19:14.640 "trtype": "TCP", 00:19:14.640 "adrfam": "IPv4", 00:19:14.640 "traddr": "10.0.0.1", 00:19:14.640 "trsvcid": "36694" 00:19:14.640 }, 00:19:14.640 "auth": { 00:19:14.640 "state": "completed", 00:19:14.640 "digest": "sha512", 00:19:14.640 "dhgroup": "ffdhe4096" 00:19:14.640 } 00:19:14.640 } 00:19:14.640 ]' 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.640 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.898 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:15.464 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.464 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.030 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.030 { 00:19:16.030 "cntlid": 129, 00:19:16.030 "qid": 0, 00:19:16.030 "state": "enabled", 00:19:16.030 "thread": "nvmf_tgt_poll_group_000", 00:19:16.030 "listen_address": { 00:19:16.030 "trtype": "TCP", 00:19:16.030 "adrfam": "IPv4", 00:19:16.030 "traddr": "10.0.0.2", 00:19:16.030 "trsvcid": "4420" 00:19:16.030 }, 00:19:16.030 "peer_address": { 00:19:16.030 "trtype": "TCP", 00:19:16.030 "adrfam": "IPv4", 00:19:16.030 "traddr": "10.0.0.1", 00:19:16.030 "trsvcid": "36728" 00:19:16.030 }, 00:19:16.030 "auth": { 00:19:16.030 "state": "completed", 00:19:16.030 "digest": "sha512", 00:19:16.030 "dhgroup": "ffdhe6144" 00:19:16.030 } 00:19:16.030 } 00:19:16.030 ]' 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.030 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.287 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.287 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.287 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.287 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.287 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.288 
10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.855 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.114 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.378 00:19:17.378 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.378 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.378 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.637 { 00:19:17.637 "cntlid": 131, 00:19:17.637 "qid": 0, 00:19:17.637 "state": "enabled", 00:19:17.637 "thread": "nvmf_tgt_poll_group_000", 00:19:17.637 "listen_address": { 00:19:17.637 "trtype": "TCP", 00:19:17.637 "adrfam": "IPv4", 00:19:17.637 "traddr": "10.0.0.2", 00:19:17.637 "trsvcid": "4420" 00:19:17.637 }, 00:19:17.637 "peer_address": { 00:19:17.637 "trtype": "TCP", 00:19:17.637 "adrfam": "IPv4", 00:19:17.637 "traddr": "10.0.0.1", 00:19:17.637 "trsvcid": "36750" 00:19:17.637 }, 00:19:17.637 "auth": { 00:19:17.637 "state": "completed", 00:19:17.637 "digest": "sha512", 00:19:17.637 "dhgroup": "ffdhe6144" 00:19:17.637 } 00:19:17.637 } 00:19:17.637 ]' 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.637 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.896 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:19:18.464 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.464 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:18.464 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.464 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.464 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.464 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.464 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.464 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.724 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.984 
00:19:18.984 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.984 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.984 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.270 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.270 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.270 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.270 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.270 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.270 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.270 { 00:19:19.270 "cntlid": 133, 00:19:19.270 "qid": 0, 00:19:19.270 "state": "enabled", 00:19:19.270 "thread": "nvmf_tgt_poll_group_000", 00:19:19.270 "listen_address": { 00:19:19.270 "trtype": "TCP", 00:19:19.270 "adrfam": "IPv4", 00:19:19.270 "traddr": "10.0.0.2", 00:19:19.270 "trsvcid": "4420" 00:19:19.270 }, 00:19:19.270 "peer_address": { 00:19:19.270 "trtype": "TCP", 00:19:19.270 "adrfam": "IPv4", 00:19:19.270 "traddr": "10.0.0.1", 00:19:19.270 "trsvcid": "36770" 00:19:19.270 }, 00:19:19.270 "auth": { 00:19:19.270 "state": "completed", 00:19:19.270 "digest": "sha512", 00:19:19.270 "dhgroup": "ffdhe6144" 00:19:19.270 } 00:19:19.270 } 00:19:19.270 ]' 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.271 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.530 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.098 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.098 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.667 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.667 { 00:19:20.667 "cntlid": 135, 00:19:20.667 "qid": 0, 00:19:20.667 "state": "enabled", 00:19:20.667 "thread": "nvmf_tgt_poll_group_000", 00:19:20.667 "listen_address": { 00:19:20.667 "trtype": "TCP", 00:19:20.667 "adrfam": "IPv4", 00:19:20.667 "traddr": "10.0.0.2", 00:19:20.667 "trsvcid": "4420" 00:19:20.667 }, 00:19:20.667 "peer_address": { 00:19:20.667 "trtype": "TCP", 00:19:20.667 "adrfam": "IPv4", 00:19:20.667 "traddr": "10.0.0.1", 00:19:20.667 "trsvcid": "36790" 00:19:20.667 }, 00:19:20.667 "auth": { 00:19:20.667 "state": "completed", 00:19:20.667 "digest": "sha512", 00:19:20.667 "dhgroup": "ffdhe6144" 00:19:20.667 } 00:19:20.667 } 00:19:20.667 ]' 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.667 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.927 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.927 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.927 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.927 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.927 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.927 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.495 
10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.495 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.754 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.323 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.323 
10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.323 { 00:19:22.323 "cntlid": 137, 00:19:22.323 "qid": 0, 00:19:22.323 "state": "enabled", 00:19:22.323 "thread": "nvmf_tgt_poll_group_000", 00:19:22.323 "listen_address": { 00:19:22.323 "trtype": "TCP", 00:19:22.323 "adrfam": "IPv4", 00:19:22.323 "traddr": "10.0.0.2", 00:19:22.323 "trsvcid": "4420" 00:19:22.323 }, 00:19:22.323 "peer_address": { 00:19:22.323 "trtype": "TCP", 00:19:22.323 "adrfam": "IPv4", 00:19:22.323 "traddr": "10.0.0.1", 00:19:22.323 "trsvcid": "36832" 00:19:22.323 }, 00:19:22.323 "auth": { 00:19:22.323 "state": "completed", 00:19:22.323 "digest": "sha512", 00:19:22.323 "dhgroup": "ffdhe8192" 00:19:22.323 } 00:19:22.323 } 00:19:22.323 ]' 00:19:22.323 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.323 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.323 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.583 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.583 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.583 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.583 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.583 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.583 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.152 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.412 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.981 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.981 { 00:19:23.981 "cntlid": 139, 00:19:23.981 "qid": 0, 00:19:23.981 "state": "enabled", 00:19:23.981 "thread": "nvmf_tgt_poll_group_000", 00:19:23.981 "listen_address": { 00:19:23.981 "trtype": "TCP", 00:19:23.981 "adrfam": "IPv4", 00:19:23.981 "traddr": "10.0.0.2", 00:19:23.981 "trsvcid": "4420" 00:19:23.981 }, 00:19:23.981 "peer_address": { 00:19:23.981 "trtype": "TCP", 00:19:23.981 "adrfam": "IPv4", 00:19:23.981 "traddr": "10.0.0.1", 00:19:23.981 "trsvcid": "44062" 00:19:23.981 }, 00:19:23.981 "auth": { 00:19:23.981 "state": "completed", 00:19:23.981 "digest": "sha512", 00:19:23.981 "dhgroup": "ffdhe8192" 00:19:23.981 } 00:19:23.981 } 00:19:23.981 ]' 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.981 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.240 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.240 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.240 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.240 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.240 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.240 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:OWUyY2FiMzExODlhOWM0ZWI5ZTljMzM5MzZkNjRhZDM3gWot: --dhchap-ctrl-secret DHHC-1:02:Y2NhNTE5OWQ1MGMxNjhhN2M3Yjg5NzRjMjBmOTk4YzcxZTI5MjljM2YzNmE2YTRmnboMig==: 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:24.810 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.070 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.638 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.638 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.638 { 00:19:25.638 "cntlid": 141, 00:19:25.638 "qid": 0, 00:19:25.638 "state": "enabled", 00:19:25.638 "thread": "nvmf_tgt_poll_group_000", 00:19:25.638 "listen_address": 
{ 00:19:25.639 "trtype": "TCP", 00:19:25.639 "adrfam": "IPv4", 00:19:25.639 "traddr": "10.0.0.2", 00:19:25.639 "trsvcid": "4420" 00:19:25.639 }, 00:19:25.639 "peer_address": { 00:19:25.639 "trtype": "TCP", 00:19:25.639 "adrfam": "IPv4", 00:19:25.639 "traddr": "10.0.0.1", 00:19:25.639 "trsvcid": "44094" 00:19:25.639 }, 00:19:25.639 "auth": { 00:19:25.639 "state": "completed", 00:19:25.639 "digest": "sha512", 00:19:25.639 "dhgroup": "ffdhe8192" 00:19:25.639 } 00:19:25.639 } 00:19:25.639 ]' 00:19:25.639 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.898 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YjUzN2VkMzY3ZWRmOWJjYTFjMzIzNmViZDI5NjgxOWUyZjE4OTc2MDY2YzVlYzEwvltVuQ==: --dhchap-ctrl-secret DHHC-1:01:OWZjMTQ4OTVlMmJiYmRjYWE4YTA0MDA2YzVlOGIzZGQ7k6YU: 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.467 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.726 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.727 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.295 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.295 { 00:19:27.295 "cntlid": 143, 00:19:27.295 "qid": 0, 00:19:27.295 "state": "enabled", 00:19:27.295 "thread": "nvmf_tgt_poll_group_000", 00:19:27.295 "listen_address": { 00:19:27.295 "trtype": "TCP", 00:19:27.295 "adrfam": "IPv4", 00:19:27.295 "traddr": "10.0.0.2", 00:19:27.295 "trsvcid": "4420" 00:19:27.295 }, 00:19:27.295 "peer_address": { 00:19:27.295 "trtype": "TCP", 00:19:27.295 "adrfam": "IPv4", 00:19:27.295 "traddr": "10.0.0.1", 00:19:27.295 "trsvcid": "44128" 00:19:27.295 }, 00:19:27.295 "auth": { 00:19:27.295 "state": "completed", 00:19:27.295 "digest": "sha512", 00:19:27.295 "dhgroup": 
"ffdhe8192" 00:19:27.295 } 00:19:27.295 } 00:19:27.295 ]' 00:19:27.295 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.554 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.554 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.554 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.554 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.554 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.554 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.554 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.813 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:19:28.072 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.331 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.899 00:19:28.899 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.899 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.899 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.159 { 00:19:29.159 "cntlid": 145, 00:19:29.159 "qid": 0, 00:19:29.159 "state": "enabled", 00:19:29.159 "thread": "nvmf_tgt_poll_group_000", 00:19:29.159 "listen_address": { 00:19:29.159 "trtype": "TCP", 00:19:29.159 "adrfam": "IPv4", 00:19:29.159 "traddr": "10.0.0.2", 00:19:29.159 "trsvcid": "4420" 00:19:29.159 }, 00:19:29.159 "peer_address": { 00:19:29.159 "trtype": "TCP", 00:19:29.159 "adrfam": "IPv4", 00:19:29.159 "traddr": "10.0.0.1", 00:19:29.159 "trsvcid": "44168" 00:19:29.159 }, 00:19:29.159 "auth": { 00:19:29.159 
"state": "completed", 00:19:29.159 "digest": "sha512", 00:19:29.159 "dhgroup": "ffdhe8192" 00:19:29.159 } 00:19:29.159 } 00:19:29.159 ]' 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.159 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.419 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGJlZmNiNDJjNDg1ZDY5ODBlNjBhMWIyMGE0NjgyYWFmODk4ZTk5N2E3M2MzODkx+W8lWw==: --dhchap-ctrl-secret DHHC-1:03:ZDNhM2IwZmVjYzFiOWI3OTllZjFmOTllYjYwOWEzMGNkOTQ5ZTMyZmQ4NTVjNDU5NDJjODZkNTQ1MjdjYjAwMVJnAIU=: 00:19:29.989 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.989 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:29.989 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.989 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.989 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.989 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:29.989 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:29.990 10:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:29.990 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:30.248 request: 00:19:30.248 { 00:19:30.248 "name": "nvme0", 00:19:30.248 "trtype": "tcp", 00:19:30.248 "traddr": "10.0.0.2", 00:19:30.248 "adrfam": "ipv4", 00:19:30.248 "trsvcid": "4420", 00:19:30.248 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:30.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:30.248 "prchk_reftag": false, 00:19:30.248 "prchk_guard": false, 00:19:30.248 "hdgst": false, 00:19:30.248 "ddgst": false, 00:19:30.248 "dhchap_key": "key2", 00:19:30.248 "method": "bdev_nvme_attach_controller", 00:19:30.248 "req_id": 1 00:19:30.248 } 00:19:30.248 Got JSON-RPC error response 00:19:30.248 response: 00:19:30.248 { 00:19:30.248 "code": -5, 00:19:30.248 "message": "Input/output error" 00:19:30.248 } 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.248 
10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:30.248 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:30.818 request: 00:19:30.818 { 00:19:30.818 "name": "nvme0", 00:19:30.818 "trtype": "tcp", 00:19:30.818 "traddr": "10.0.0.2", 00:19:30.818 "adrfam": "ipv4", 00:19:30.818 "trsvcid": "4420", 00:19:30.818 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:30.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:30.818 "prchk_reftag": false, 00:19:30.818 "prchk_guard": false, 00:19:30.818 "hdgst": false, 00:19:30.818 "ddgst": false, 00:19:30.818 "dhchap_key": "key1", 00:19:30.818 "dhchap_ctrlr_key": "ckey2", 00:19:30.818 "method": "bdev_nvme_attach_controller", 00:19:30.818 "req_id": 1 00:19:30.818 } 00:19:30.818 Got JSON-RPC error response 00:19:30.818 response: 00:19:30.818 { 00:19:30.818 "code": -5, 00:19:30.818 "message": "Input/output error" 00:19:30.818 } 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.818 10:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.818 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.077 request: 00:19:31.077 { 00:19:31.077 "name": "nvme0", 00:19:31.077 "trtype": "tcp", 00:19:31.077 "traddr": "10.0.0.2", 00:19:31.077 "adrfam": "ipv4", 00:19:31.077 "trsvcid": "4420", 00:19:31.077 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:31.077 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:31.077 "prchk_reftag": false, 00:19:31.077 "prchk_guard": false, 00:19:31.077 "hdgst": false, 00:19:31.077 "ddgst": false, 00:19:31.077 "dhchap_key": "key1", 00:19:31.077 "dhchap_ctrlr_key": "ckey1", 00:19:31.077 "method": "bdev_nvme_attach_controller", 00:19:31.077 "req_id": 1 00:19:31.077 } 00:19:31.077 Got JSON-RPC error response 00:19:31.077 response: 00:19:31.077 { 00:19:31.077 "code": -5, 00:19:31.077 "message": "Input/output error" 00:19:31.077 } 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3885820 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3885820 ']' 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3885820 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3885820 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3885820' 00:19:31.336 killing process with pid 3885820 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3885820 00:19:31.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3885820 00:19:31.336 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:31.337 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=3906855 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3906855 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3906855 ']' 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.601 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.602 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.602 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3906855 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3906855 ']' 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
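For orientation: up to this point the trace repeats one authentication round-trip per key (key1..key3) with the sha512 digest and the ffdhe8192 DH group, followed by mismatch checks, before restarting nvmf_tgt with -L nvmf_auth. Condensed from the xtrace expansions above, that round-trip has roughly the following shape. This is a readability sketch, not the literal target/auth.sh code: rpc_cmd (target RPC socket) and hostrpc (host socket /var/tmp/host.sock) are helper wrappers from the test's common scripts as expanded in the trace, and <host-uuid> stands in for the host UUID used throughout this run; the DHHC-1 secrets are elided.

# Sketch of one connect_authenticate round (sha512 / ffdhe8192), assembled
# only from commands visible in the trace above.

# restrict the host initiator to the digest/dhgroup pair under test
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# register the host on the subsystem with the key pair for this round
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attach a controller from the host side and verify the qpair authenticated
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# tear down: detach the bdev controller, exercise nvme-cli connect/disconnect
# with the DHHC-1 secrets, then deregister the host before the next key
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> \
    --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:<host-uuid>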
00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.242 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.500 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.500 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:32.500 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:32.500 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.500 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.758 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.017 00:19:33.017 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.017 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.017 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.275 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.275 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.275 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.275 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.276 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.276 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.276 { 00:19:33.276 "cntlid": 1, 00:19:33.276 "qid": 0, 00:19:33.276 "state": "enabled", 00:19:33.276 "thread": "nvmf_tgt_poll_group_000", 00:19:33.276 "listen_address": { 00:19:33.276 "trtype": "TCP", 00:19:33.276 "adrfam": "IPv4", 00:19:33.276 "traddr": "10.0.0.2", 00:19:33.276 "trsvcid": "4420" 00:19:33.276 }, 00:19:33.276 "peer_address": { 00:19:33.276 "trtype": "TCP", 00:19:33.276 "adrfam": "IPv4", 00:19:33.276 "traddr": "10.0.0.1", 00:19:33.276 "trsvcid": "36006" 00:19:33.276 }, 00:19:33.276 "auth": { 00:19:33.276 "state": "completed", 00:19:33.276 "digest": "sha512", 00:19:33.276 "dhgroup": "ffdhe8192" 00:19:33.276 } 00:19:33.276 } 00:19:33.276 ]' 00:19:33.276 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.276 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.276 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.535 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.535 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.535 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.535 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.535 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.535 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YzYzNjlkZTVmYTFlM2EzNWJjYjE1NGFiNDMwOTY0OGIyYzRhZGUwNDdjNGE5N2IwNjg4ZDNlZjFkOGU2OWE1OEKAmeI=: 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:34.102 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.361 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.620 request: 00:19:34.620 { 00:19:34.620 "name": "nvme0", 00:19:34.620 "trtype": "tcp", 00:19:34.620 "traddr": "10.0.0.2", 00:19:34.620 "adrfam": "ipv4", 00:19:34.620 "trsvcid": "4420", 00:19:34.620 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:34.620 "prchk_reftag": false, 00:19:34.620 "prchk_guard": false, 00:19:34.620 "hdgst": false, 00:19:34.620 "ddgst": false, 00:19:34.620 "dhchap_key": "key3", 00:19:34.620 "method": "bdev_nvme_attach_controller", 00:19:34.620 "req_id": 1 00:19:34.620 } 00:19:34.620 Got JSON-RPC error response 00:19:34.620 response: 00:19:34.620 { 00:19:34.620 "code": -5, 00:19:34.620 "message": "Input/output error" 00:19:34.620 } 00:19:34.620 10:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:34.620 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.620 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.620 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.620 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:34.620 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:34.620 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:34.620 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.879 request: 00:19:34.879 { 00:19:34.879 "name": "nvme0", 00:19:34.879 "trtype": "tcp", 00:19:34.879 "traddr": "10.0.0.2", 00:19:34.879 "adrfam": "ipv4", 00:19:34.879 "trsvcid": "4420", 00:19:34.879 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:34.879 "prchk_reftag": false, 00:19:34.879 "prchk_guard": false, 00:19:34.879 "hdgst": false, 00:19:34.879 "ddgst": false, 00:19:34.879 "dhchap_key": "key3", 00:19:34.879 
"method": "bdev_nvme_attach_controller", 00:19:34.879 "req_id": 1 00:19:34.879 } 00:19:34.879 Got JSON-RPC error response 00:19:34.879 response: 00:19:34.879 { 00:19:34.879 "code": -5, 00:19:34.879 "message": "Input/output error" 00:19:34.879 } 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:34.879 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.138 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:35.139 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:35.397 request: 00:19:35.397 { 00:19:35.397 "name": "nvme0", 00:19:35.397 "trtype": "tcp", 00:19:35.397 "traddr": "10.0.0.2", 00:19:35.397 "adrfam": "ipv4", 00:19:35.397 "trsvcid": "4420", 00:19:35.398 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:35.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:35.398 "prchk_reftag": false, 00:19:35.398 "prchk_guard": false, 00:19:35.398 "hdgst": false, 00:19:35.398 "ddgst": false, 00:19:35.398 "dhchap_key": "key0", 00:19:35.398 "dhchap_ctrlr_key": "key1", 00:19:35.398 "method": "bdev_nvme_attach_controller", 00:19:35.398 "req_id": 1 00:19:35.398 } 00:19:35.398 Got JSON-RPC error response 00:19:35.398 response: 00:19:35.398 { 00:19:35.398 "code": -5, 00:19:35.398 "message": "Input/output error" 00:19:35.398 } 00:19:35.398 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:35.398 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:35.398 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:35.398 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:35.398 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:35.398 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:35.656 00:19:35.656 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:35.656 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
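The exchange above is the tail of the DH-HMAC-CHAP checks: the host-side bdev_nvme RPCs go to a second SPDK app listening on /var/tmp/host.sock, the deliberately broken attaches are wrapped in NOT so the JSON-RPC -5 ("Input/output error") responses are the expected result, and the run ends with an attach using key0 that is confirmed via bdev_nvme_get_controllers and then detached. A condensed host-side sketch of that flow, reusing the socket path, address, and NQNs exactly as they appear in this run (the names key0/key1/key3 refer to DH-CHAP keys loaded earlier in auth.sh and are assumed to already exist; rpc.py is invoked relative to the SPDK tree):

    #!/usr/bin/env bash
    # Host-side DH-HMAC-CHAP sketch against the target at 10.0.0.2:4420 (values from this log).
    rpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e"
    subnqn="nqn.2024-03.io.spdk:cnode0"

    # Restrict the digests and DH groups the host offers during authentication.
    rpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512

    # An attach the script expects to be rejected: auth.sh wraps it in NOT, so the
    # JSON-RPC -5 (Input/output error) seen in the trace is the passing outcome.
    if rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key key3; then
        echo "unexpected success" >&2
        exit 1
    fi

    # The final attach with key0 succeeds in this run; verify the controller and tear it down.
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key key0
    rpc bdev_nvme_get_controllers | jq -r '.[].name'   # prints: nvme0
    rpc bdev_nvme_detach_controller nvme0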
00:19:35.656 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.656 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.656 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.656 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3886079 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3886079 ']' 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3886079 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3886079 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3886079' 00:19:35.926 killing process with pid 3886079 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3886079 00:19:35.926 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3886079 00:19:36.187 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:36.187 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.187 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:36.187 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.187 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:36.187 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.187 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.187 rmmod nvme_tcp 00:19:36.187 rmmod nvme_fabrics 00:19:36.446 rmmod nvme_keyring 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 3906855 ']' 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3906855 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3906855 ']' 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3906855 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:36.446 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.447 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3906855 00:19:36.447 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.447 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.447 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3906855' 00:19:36.447 killing process with pid 3906855 00:19:36.447 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3906855 00:19:36.447 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3906855 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.705 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.G4h /tmp/spdk.key-sha256.s00 /tmp/spdk.key-sha384.Xj5 /tmp/spdk.key-sha512.nH2 /tmp/spdk.key-sha512.jVY /tmp/spdk.key-sha384.d6b /tmp/spdk.key-sha256.NBi '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:38.611 00:19:38.611 real 2m9.725s 00:19:38.611 user 4m48.523s 00:19:38.611 sys 0m28.990s 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.611 ************************************ 00:19:38.611 END TEST nvmf_auth_target 00:19:38.611 ************************************ 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:38.611 10:34:42 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.611 10:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:38.871 ************************************ 00:19:38.871 START TEST nvmf_bdevio_no_huge 00:19:38.871 ************************************ 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:38.871 * Looking for test storage... 00:19:38.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:38.871 10:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:38.871 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.448 10:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:45.448 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.448 10:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:45.448 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.448 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:45.448 Found net devices under 0000:af:00.0: cvl_0_0 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
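The scan above is gather_supported_nvmf_pci_devs: common.sh builds lists of known Intel E810/X722 and Mellanox device IDs, keeps the E810 matches on this rig, and maps each matching PCI function to its kernel netdev through /sys/bus/pci/devices/<bdf>/net, which is how both 0000:af:00.x ports end up as cvl_0_0 and cvl_0_1. A rough stand-alone sketch of the same idea (this is not the common.sh implementation itself, and it is limited to the 8086:159b device ID this run actually matches):

    #!/usr/bin/env bash
    # List net devices backed by Intel E810 (8086:159b) PCI functions, as this run discovers.
    net_devs=()
    for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] && net_devs+=("$(basename "$netdir")")
        done
    done
    printf 'Found net devices: %s\n' "${net_devs[*]}"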
00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:45.448 Found net devices under 0000:af:00.1: cvl_0_1 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.448 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.449 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.449 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:45.449 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.449 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.708 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:19:45.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:19:45.708 00:19:45.708 --- 10.0.0.2 ping statistics --- 00:19:45.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.708 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:19:45.708 00:19:45.708 --- 10.0.0.1 ping statistics --- 00:19:45.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.708 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3911379 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3911379 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3911379 ']' 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
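nvmf_tcp_init above splits the two E810 ports across network namespaces: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, both directions are checked with a single ping, and only then is nvmf_tgt started inside the namespace with --no-huge -s 1024 for this hugepage-free variant of the test. The same steps, lifted from the trace (interface and namespace names are specific to this machine, the addr-flush cleanup steps are omitted, and paths are shown relative to the SPDK tree):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

    # Target started without hugepages, capped at 1024 MiB of memory, on cores 0x78:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &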
00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.708 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:45.708 [2024-07-25 10:34:49.406346] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:19:45.709 [2024-07-25 10:34:49.406396] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:45.967 [2024-07-25 10:34:49.486068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.967 [2024-07-25 10:34:49.580939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.967 [2024-07-25 10:34:49.580979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.968 [2024-07-25 10:34:49.580989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.968 [2024-07-25 10:34:49.580997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.968 [2024-07-25 10:34:49.581004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.968 [2024-07-25 10:34:49.581130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.968 [2024-07-25 10:34:49.581220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:45.968 [2024-07-25 10:34:49.581306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.968 [2024-07-25 10:34:49.581307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:46.535 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.535 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:46.535 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.535 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.535 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.793 [2024-07-25 10:34:50.268792] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.793 10:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.793 Malloc0 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.793 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.794 [2024-07-25 10:34:50.313588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.794 { 00:19:46.794 "params": { 00:19:46.794 "name": "Nvme$subsystem", 00:19:46.794 "trtype": "$TEST_TRANSPORT", 00:19:46.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.794 "adrfam": "ipv4", 00:19:46.794 "trsvcid": "$NVMF_PORT", 00:19:46.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.794 "hdgst": ${hdgst:-false}, 00:19:46.794 "ddgst": ${ddgst:-false} 00:19:46.794 }, 00:19:46.794 "method": "bdev_nvme_attach_controller" 00:19:46.794 } 00:19:46.794 EOF 00:19:46.794 )") 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
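Once the target is listening, bdevio.sh provisions it with a short RPC sequence over the target's default /var/tmp/spdk.sock Unix socket: a TCP transport with an in-capsule data size of 8192 bytes, a 64 MiB / 512-byte-block malloc bdev, a subsystem that allows any host (-a), the namespace, and a listener on 10.0.0.2:4420. gen_nvmf_target_json (a helper in test/nvmf/common.sh) then prints the bdev_nvme_attach_controller parameters shown below, and the bdevio app reads that JSON through a process-substitution fd (/dev/fd/62 in the trace). The target-side sequence as issued in this run, with paths relative to the SPDK tree:

    rpc() { scripts/rpc.py "$@"; }        # talks to /var/tmp/spdk.sock by default
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio then runs against the generated attach-controller config; gen_nvmf_target_json
    # comes from test/nvmf/common.sh, which the script has already sourced.
    ./test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(gen_nvmf_target_json)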
00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:46.794 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.794 "params": { 00:19:46.794 "name": "Nvme1", 00:19:46.794 "trtype": "tcp", 00:19:46.794 "traddr": "10.0.0.2", 00:19:46.794 "adrfam": "ipv4", 00:19:46.794 "trsvcid": "4420", 00:19:46.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.794 "hdgst": false, 00:19:46.794 "ddgst": false 00:19:46.794 }, 00:19:46.794 "method": "bdev_nvme_attach_controller" 00:19:46.794 }' 00:19:46.794 [2024-07-25 10:34:50.367098] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:19:46.794 [2024-07-25 10:34:50.367148] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3911660 ] 00:19:46.794 [2024-07-25 10:34:50.441955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.053 [2024-07-25 10:34:50.542168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.053 [2024-07-25 10:34:50.542263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.053 [2024-07-25 10:34:50.542265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.053 I/O targets: 00:19:47.053 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:47.053 00:19:47.053 00:19:47.053 CUnit - A unit testing framework for C - Version 2.1-3 00:19:47.053 http://cunit.sourceforge.net/ 00:19:47.053 00:19:47.053 00:19:47.053 Suite: bdevio tests on: Nvme1n1 00:19:47.312 Test: blockdev write read block ...passed 00:19:47.312 Test: blockdev write zeroes read block ...passed 00:19:47.312 Test: blockdev write zeroes read no split ...passed 00:19:47.312 Test: blockdev write zeroes read split ...passed 00:19:47.312 Test: blockdev write zeroes read split partial ...passed 00:19:47.312 Test: blockdev reset ...[2024-07-25 10:34:50.965925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.312 [2024-07-25 10:34:50.965993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107e670 (9): Bad file descriptor 00:19:47.312 [2024-07-25 10:34:50.981443] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:47.312 passed 00:19:47.312 Test: blockdev write read 8 blocks ...passed 00:19:47.312 Test: blockdev write read size > 128k ...passed 00:19:47.312 Test: blockdev write read invalid size ...passed 00:19:47.571 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:47.571 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:47.571 Test: blockdev write read max offset ...passed 00:19:47.571 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:47.571 Test: blockdev writev readv 8 blocks ...passed 00:19:47.571 Test: blockdev writev readv 30 x 1block ...passed 00:19:47.571 Test: blockdev writev readv block ...passed 00:19:47.571 Test: blockdev writev readv size > 128k ...passed 00:19:47.571 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:47.571 Test: blockdev comparev and writev ...[2024-07-25 10:34:51.157247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.157277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.157293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.157304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.157630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.157643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.157657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.157667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.157994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.158008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.158022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.158032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.158346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.158359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.158373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.571 [2024-07-25 10:34:51.158383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:47.571 passed 00:19:47.571 Test: blockdev nvme passthru rw ...passed 00:19:47.571 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:34:51.240244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.571 [2024-07-25 10:34:51.240264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.240466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.571 [2024-07-25 10:34:51.240478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.240674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.571 [2024-07-25 10:34:51.240689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:47.571 [2024-07-25 10:34:51.240890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.571 [2024-07-25 10:34:51.240903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:47.571 passed 00:19:47.571 Test: blockdev nvme admin passthru ...passed 00:19:47.833 Test: blockdev copy ...passed 00:19:47.833 00:19:47.833 Run Summary: Type Total Ran Passed Failed Inactive 00:19:47.833 suites 1 1 n/a 0 0 00:19:47.833 tests 23 23 23 0 0 00:19:47.833 asserts 152 152 152 0 n/a 00:19:47.833 00:19:47.833 Elapsed time = 1.112 seconds 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:48.093 rmmod nvme_tcp 00:19:48.093 rmmod nvme_fabrics 00:19:48.093 rmmod nvme_keyring 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3911379 ']' 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3911379 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3911379 ']' 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3911379 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3911379 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3911379' 00:19:48.093 killing process with pid 3911379 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3911379 00:19:48.093 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3911379 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.662 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.571 00:19:50.571 real 0m11.851s 00:19:50.571 user 0m13.729s 00:19:50.571 sys 0m6.373s 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:50.571 ************************************ 00:19:50.571 END TEST nvmf_bdevio_no_huge 00:19:50.571 ************************************ 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.571 ************************************ 00:19:50.571 START TEST nvmf_tls 00:19:50.571 ************************************ 00:19:50.571 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:50.831 * Looking for test storage... 00:19:50.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.831 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
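The START TEST / END TEST banners and the real/user/sys timing printed for nvmf_bdevio_no_huge above come from the run_test helper in common/autotest_common.sh, which is now wrapping tls.sh in the same way. A minimal sketch of its shape, keeping only the banner-and-time skeleton that the trace shows (the real helper also toggles xtrace state and records per-test timing, which is omitted here), is:

  # stripped-down approximation of the run_test wrapper seen in the trace
  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys lines above
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }

  run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp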
00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.832 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:57.421 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:57.422 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:57.422 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:57.422 Found net devices under 0000:af:00.0: cvl_0_0 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:57.422 Found net devices under 0000:af:00.1: cvl_0_1 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.422 10:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:57.422 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.422 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.422 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.422 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:57.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:19:57.422 00:19:57.422 --- 10.0.0.2 ping statistics --- 00:19:57.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.422 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:57.422 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:19:57.681 00:19:57.681 --- 10.0.0.1 ping statistics --- 00:19:57.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.681 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3915604 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3915604 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3915604 ']' 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.681 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.681 [2024-07-25 10:35:01.222872] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
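The nvmf_tgt instance now starting with --wait-for-rpc runs inside the cvl_0_0_ns_spdk namespace that was just assembled and ping-verified. Condensed from the ip/iptables commands in the trace above (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this run detected, and the commands need root), the plumbing amounts to:

  # target port moves into its own namespace; the initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator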
00:19:57.681 [2024-07-25 10:35:01.222917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.681 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.681 [2024-07-25 10:35:01.296924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.681 [2024-07-25 10:35:01.364156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.681 [2024-07-25 10:35:01.364197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.681 [2024-07-25 10:35:01.364206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.681 [2024-07-25 10:35:01.364215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.682 [2024-07-25 10:35:01.364221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.682 [2024-07-25 10:35:01.364242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:58.616 true 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.616 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:58.875 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:58.875 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:58.875 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:58.875 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.875 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:59.133 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:59.133 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:59.133 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:19:59.392 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.392 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:59.392 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:59.392 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:59.392 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.392 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:59.651 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:59.651 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:59.651 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:59.908 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.908 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:59.908 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:59.908 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:59.908 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:00.165 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:00.165 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:00.424 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.El70dmCQVW 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5C9quFN3EG 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.El70dmCQVW 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5C9quFN3EG 00:20:00.424 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:00.682 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:00.941 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.El70dmCQVW 00:20:00.941 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.El70dmCQVW 00:20:00.941 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.941 [2024-07-25 10:35:04.619885] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.941 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.199 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.458 [2024-07-25 10:35:04.956753] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.458 [2024-07-25 10:35:04.956970] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.458 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.458 malloc0 00:20:01.458 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.716 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.El70dmCQVW 00:20:01.974 [2024-07-25 10:35:05.490416] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:01.974 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.El70dmCQVW 00:20:01.974 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.941 Initializing NVMe Controllers 00:20:11.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.941 Initialization complete. Launching workers. 00:20:11.941 ======================================================== 00:20:11.941 Latency(us) 00:20:11.941 Device Information : IOPS MiB/s Average min max 00:20:11.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16422.76 64.15 3897.44 830.90 5211.14 00:20:11.941 ======================================================== 00:20:11.941 Total : 16422.76 64.15 3897.44 830.90 5211.14 00:20:11.941 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.El70dmCQVW 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.El70dmCQVW' 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3918032 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3918032 /var/tmp/bdevperf.sock 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3918032 ']' 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.941 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.199 [2024-07-25 10:35:15.655075] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:20:12.199 [2024-07-25 10:35:15.655129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918032 ] 00:20:12.199 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.199 [2024-07-25 10:35:15.722006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.199 [2024-07-25 10:35:15.798208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.136 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.136 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:13.136 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.El70dmCQVW 00:20:13.136 [2024-07-25 10:35:16.635903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.136 [2024-07-25 10:35:16.635975] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:13.136 TLSTESTn1 00:20:13.136 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:13.136 Running I/O for 10 seconds... 
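While TLSTESTn1 runs its 10 second verify workload, the target-side state it depends on has all been built through the rpc.py calls traced since tls.sh started. Collected in one place as a condensed replay (the NQNs, address, port, key string and temp-file name are exactly the ones from this run; framework_start_init appears mid-sequence because nvmf_tgt was launched with --wait-for-rpc):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  key_path=/tmp/tmp.El70dmCQVW

  # PSK in the NVMe TLS interchange format, as produced by format_interchange_psk above
  echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
  chmod 0600 "$key_path"

  # socket layer: ssl implementation, TLS 1.3 only, then finish framework init
  $rpc_py sock_set_default_impl -i ssl
  $rpc_py sock_impl_set_options -i ssl --tls-version 13
  $rpc_py framework_start_init

  # TCP transport, subsystem, TLS listener (-k), one malloc namespace, and the allowed host plus its PSK
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"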
00:20:23.173 00:20:23.173 Latency(us) 00:20:23.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.173 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:23.173 Verification LBA range: start 0x0 length 0x2000 00:20:23.173 TLSTESTn1 : 10.03 4724.65 18.46 0.00 0.00 27039.18 6527.39 72561.46 00:20:23.173 =================================================================================================================== 00:20:23.173 Total : 4724.65 18.46 0.00 0.00 27039.18 6527.39 72561.46 00:20:23.173 0 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3918032 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3918032 ']' 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3918032 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3918032 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3918032' 00:20:23.433 killing process with pid 3918032 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3918032 00:20:23.433 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.433 00:20:23.433 Latency(us) 00:20:23.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.433 =================================================================================================================== 00:20:23.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.433 [2024-07-25 10:35:26.933097] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:23.433 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3918032 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5C9quFN3EG 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5C9quFN3EG 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5C9quFN3EG 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5C9quFN3EG' 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3919880 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3919880 /var/tmp/bdevperf.sock 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3919880 ']' 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.433 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.693 [2024-07-25 10:35:27.164732] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:23.693 [2024-07-25 10:35:27.164786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919880 ] 00:20:23.693 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.693 [2024-07-25 10:35:27.231990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.693 [2024-07-25 10:35:27.298482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.261 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.261 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:24.521 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5C9quFN3EG 00:20:24.521 [2024-07-25 10:35:28.128011] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.521 [2024-07-25 10:35:28.128095] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:24.521 [2024-07-25 10:35:28.136604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:24.521 [2024-07-25 10:35:28.137367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x63a5e0 (107): Transport endpoint is not connected 00:20:24.521 [2024-07-25 10:35:28.138361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x63a5e0 (9): Bad file descriptor 00:20:24.521 [2024-07-25 10:35:28.139362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:24.521 [2024-07-25 10:35:28.139377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:24.521 [2024-07-25 10:35:28.139388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:24.521 request: 00:20:24.521 { 00:20:24.521 "name": "TLSTEST", 00:20:24.521 "trtype": "tcp", 00:20:24.521 "traddr": "10.0.0.2", 00:20:24.521 "adrfam": "ipv4", 00:20:24.521 "trsvcid": "4420", 00:20:24.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.521 "prchk_reftag": false, 00:20:24.521 "prchk_guard": false, 00:20:24.521 "hdgst": false, 00:20:24.521 "ddgst": false, 00:20:24.521 "psk": "/tmp/tmp.5C9quFN3EG", 00:20:24.521 "method": "bdev_nvme_attach_controller", 00:20:24.521 "req_id": 1 00:20:24.521 } 00:20:24.521 Got JSON-RPC error response 00:20:24.521 response: 00:20:24.521 { 00:20:24.521 "code": -5, 00:20:24.521 "message": "Input/output error" 00:20:24.521 } 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3919880 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3919880 ']' 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3919880 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3919880 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3919880' 00:20:24.521 killing process with pid 3919880 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3919880 00:20:24.521 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.521 00:20:24.521 Latency(us) 00:20:24.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.521 =================================================================================================================== 00:20:24.521 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.521 [2024-07-25 10:35:28.211374] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:24.521 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3919880 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.El70dmCQVW 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.El70dmCQVW 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.El70dmCQVW 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.El70dmCQVW' 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3920152 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3920152 /var/tmp/bdevperf.sock 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3920152 ']' 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.781 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.781 [2024-07-25 10:35:28.432250] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:24.781 [2024-07-25 10:35:28.432297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920152 ] 00:20:24.781 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.041 [2024-07-25 10:35:28.497693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.041 [2024-07-25 10:35:28.562030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.610 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.610 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:25.610 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.El70dmCQVW 00:20:25.870 [2024-07-25 10:35:29.396462] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.870 [2024-07-25 10:35:29.396543] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.870 [2024-07-25 10:35:29.401079] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:25.870 [2024-07-25 10:35:29.401108] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:25.870 [2024-07-25 10:35:29.401135] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:25.870 [2024-07-25 10:35:29.401779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db95e0 (107): Transport endpoint is not connected 00:20:25.870 [2024-07-25 10:35:29.402770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db95e0 (9): Bad file descriptor 00:20:25.870 [2024-07-25 10:35:29.403771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:25.870 [2024-07-25 10:35:29.403784] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:25.870 [2024-07-25 10:35:29.403796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:25.870 request: 00:20:25.870 { 00:20:25.870 "name": "TLSTEST", 00:20:25.870 "trtype": "tcp", 00:20:25.870 "traddr": "10.0.0.2", 00:20:25.870 "adrfam": "ipv4", 00:20:25.870 "trsvcid": "4420", 00:20:25.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.870 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:25.870 "prchk_reftag": false, 00:20:25.870 "prchk_guard": false, 00:20:25.870 "hdgst": false, 00:20:25.870 "ddgst": false, 00:20:25.870 "psk": "/tmp/tmp.El70dmCQVW", 00:20:25.870 "method": "bdev_nvme_attach_controller", 00:20:25.870 "req_id": 1 00:20:25.870 } 00:20:25.870 Got JSON-RPC error response 00:20:25.870 response: 00:20:25.870 { 00:20:25.870 "code": -5, 00:20:25.870 "message": "Input/output error" 00:20:25.870 } 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3920152 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3920152 ']' 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3920152 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3920152 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3920152' 00:20:25.870 killing process with pid 3920152 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3920152 00:20:25.870 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.870 00:20:25.870 Latency(us) 00:20:25.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.870 =================================================================================================================== 00:20:25.870 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.870 [2024-07-25 10:35:29.479534] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:25.870 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3920152 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.El70dmCQVW 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.El70dmCQVW 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.El70dmCQVW 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.El70dmCQVW' 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3920412 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3920412 /var/tmp/bdevperf.sock 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3920412 ']' 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.130 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.130 [2024-07-25 10:35:29.701928] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:26.130 [2024-07-25 10:35:29.701984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920412 ] 00:20:26.130 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.130 [2024-07-25 10:35:29.767932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.389 [2024-07-25 10:35:29.834159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.958 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.958 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:26.958 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.El70dmCQVW 00:20:26.958 [2024-07-25 10:35:30.643816] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.958 [2024-07-25 10:35:30.643904] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:26.958 [2024-07-25 10:35:30.652858] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:26.958 [2024-07-25 10:35:30.652882] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:26.958 [2024-07-25 10:35:30.652908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:26.958 [2024-07-25 10:35:30.653165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa815e0 (107): Transport endpoint is not connected 00:20:26.958 [2024-07-25 10:35:30.654157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa815e0 (9): Bad file descriptor 00:20:26.958 [2024-07-25 10:35:30.655158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:26.958 [2024-07-25 10:35:30.655171] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:26.958 [2024-07-25 10:35:30.655182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
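The JSON-RPC error dumped next traces back to the PSK lookup logged just above: for a TLS connection the target resolves the pre-shared key by an identity string it builds from the connecting host NQN and the subsystem NQN (as printed by posix_sock_psk_find_session_server_cb), and no key is registered under the host1/cnode2 pairing used by this intentionally failing attach, so the connection never comes up ("Transport endpoint is not connected") and the attach RPC returns -5 (Input/output error). A small illustration of the identity exactly as the target printed it:

    # PSK identity the target could not resolve (format copied from the log line above)
    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    echo "NVMe0R01 ${hostnqn} ${subnqn}"
    # => NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2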
00:20:26.958 request: 00:20:26.958 { 00:20:26.958 "name": "TLSTEST", 00:20:26.958 "trtype": "tcp", 00:20:26.959 "traddr": "10.0.0.2", 00:20:26.959 "adrfam": "ipv4", 00:20:26.959 "trsvcid": "4420", 00:20:26.959 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:26.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.959 "prchk_reftag": false, 00:20:26.959 "prchk_guard": false, 00:20:26.959 "hdgst": false, 00:20:26.959 "ddgst": false, 00:20:26.959 "psk": "/tmp/tmp.El70dmCQVW", 00:20:26.959 "method": "bdev_nvme_attach_controller", 00:20:26.959 "req_id": 1 00:20:26.959 } 00:20:26.959 Got JSON-RPC error response 00:20:26.959 response: 00:20:26.959 { 00:20:26.959 "code": -5, 00:20:26.959 "message": "Input/output error" 00:20:26.959 } 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3920412 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3920412 ']' 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3920412 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3920412 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3920412' 00:20:27.218 killing process with pid 3920412 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3920412 00:20:27.218 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.218 00:20:27.218 Latency(us) 00:20:27.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.218 =================================================================================================================== 00:20:27.218 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.218 [2024-07-25 10:35:30.725506] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3920412 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3920527 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3920527 /var/tmp/bdevperf.sock 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3920527 ']' 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.218 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.477 [2024-07-25 10:35:30.931529] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:27.477 [2024-07-25 10:35:30.931581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920527 ] 00:20:27.477 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.477 [2024-07-25 10:35:30.998095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.477 [2024-07-25 10:35:31.072666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.044 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.044 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:28.045 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:28.304 [2024-07-25 10:35:31.906072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:28.304 [2024-07-25 10:35:31.907860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdceb50 (9): Bad file descriptor 00:20:28.304 [2024-07-25 10:35:31.908859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:28.304 [2024-07-25 10:35:31.908872] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:28.304 [2024-07-25 10:35:31.908882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:28.304 request: 00:20:28.304 { 00:20:28.304 "name": "TLSTEST", 00:20:28.304 "trtype": "tcp", 00:20:28.304 "traddr": "10.0.0.2", 00:20:28.304 "adrfam": "ipv4", 00:20:28.304 "trsvcid": "4420", 00:20:28.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.304 "prchk_reftag": false, 00:20:28.304 "prchk_guard": false, 00:20:28.304 "hdgst": false, 00:20:28.304 "ddgst": false, 00:20:28.304 "method": "bdev_nvme_attach_controller", 00:20:28.304 "req_id": 1 00:20:28.304 } 00:20:28.304 Got JSON-RPC error response 00:20:28.304 response: 00:20:28.304 { 00:20:28.304 "code": -5, 00:20:28.304 "message": "Input/output error" 00:20:28.304 } 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3920527 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3920527 ']' 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3920527 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3920527 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3920527' 00:20:28.304 killing process with pid 3920527 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3920527 00:20:28.304 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.304 00:20:28.304 Latency(us) 00:20:28.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.304 =================================================================================================================== 00:20:28.304 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.304 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3920527 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3915604 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3915604 ']' 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3915604 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3915604 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3915604' 00:20:28.563 killing process with pid 3915604 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3915604 00:20:28.563 [2024-07-25 10:35:32.211925] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:28.563 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3915604 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.p1UQ7icB8o 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.p1UQ7icB8o 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3920804 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3920804 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3920804 ']' 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.822 10:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.822 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.822 [2024-07-25 10:35:32.518286] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:20:28.822 [2024-07-25 10:35:32.518338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.081 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.081 [2024-07-25 10:35:32.593054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.081 [2024-07-25 10:35:32.662336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.081 [2024-07-25 10:35:32.662378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.081 [2024-07-25 10:35:32.662388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.081 [2024-07-25 10:35:32.662398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.081 [2024-07-25 10:35:32.662405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
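The key file /tmp/tmp.p1UQ7icB8o used from here on holds the interchange-format key produced just above by format_interchange_psk/format_key (tls.sh@159-162): the configured key bytes plus a CRC32, base64-encoded and wrapped as NVMeTLSkey-1:<hash>:<base64>:, then written out and locked down to 0600. A minimal stand-alone sketch of that wrapping follows; the helper internals and the CRC byte order are assumptions here, only the wrapper layout and the resulting NVMeTLSkey-1:02:... value come from the trace:

    # Sketch (assumed) of the interchange-key wrapping done by the traced format_key helper
    key=00112233445566778899aabbccddeeff0011223344556677
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:"+base64.b64encode(k+crc).decode()+":")' "$key"
    # the :02: field is the hash indicator passed as "2" in the trace; CRC byte order assumed
    chmod 0600 /tmp/tmp.p1UQ7icB8o   # as done at tls.sh@162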
00:20:29.081 [2024-07-25 10:35:32.662426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.p1UQ7icB8o 00:20:29.647 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p1UQ7icB8o 00:20:29.904 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.904 [2024-07-25 10:35:33.505655] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.904 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:30.161 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:30.161 [2024-07-25 10:35:33.834493] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.161 [2024-07-25 10:35:33.834692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.161 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:30.419 malloc0 00:20:30.419 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:30.676 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p1UQ7icB8o 00:20:30.676 [2024-07-25 10:35:34.332262] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:30.676 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p1UQ7icB8o 00:20:30.676 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.676 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.676 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.676 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p1UQ7icB8o' 00:20:30.676 10:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3921227 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3921227 /var/tmp/bdevperf.sock 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3921227 ']' 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.677 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.677 [2024-07-25 10:35:34.377245] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:20:30.677 [2024-07-25 10:35:34.377293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921227 ] 00:20:30.936 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.936 [2024-07-25 10:35:34.443248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.936 [2024-07-25 10:35:34.517440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.504 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.504 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:31.504 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p1UQ7icB8o 00:20:31.763 [2024-07-25 10:35:35.336432] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.763 [2024-07-25 10:35:35.336508] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.763 TLSTESTn1 00:20:31.763 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:32.021 Running I/O for 10 seconds... 
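For orientation, the initiator side of the passing case just traced reduces to three steps, shown here with the long workspace paths shortened (build/examples/bdevperf and scripts/rpc.py in the tree); the 10-second verify results are reported in the log right below:

    # Start bdevperf in RPC-wait mode on its own socket (the harness backgrounds it)
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # Attach a TLS-protected controller using the 0600 key file
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.p1UQ7icB8o
    # Kick off the timed run over the bdevperf RPC socket
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests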
00:20:42.002 00:20:42.002 Latency(us) 00:20:42.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.002 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:42.002 Verification LBA range: start 0x0 length 0x2000 00:20:42.002 TLSTESTn1 : 10.02 4609.93 18.01 0.00 0.00 27715.62 5688.52 73819.75 00:20:42.002 =================================================================================================================== 00:20:42.002 Total : 4609.93 18.01 0.00 0.00 27715.62 5688.52 73819.75 00:20:42.002 0 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3921227 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3921227 ']' 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3921227 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3921227 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3921227' 00:20:42.002 killing process with pid 3921227 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3921227 00:20:42.002 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.002 00:20:42.002 Latency(us) 00:20:42.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.002 =================================================================================================================== 00:20:42.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.002 [2024-07-25 10:35:45.651885] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.002 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3921227 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.p1UQ7icB8o 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p1UQ7icB8o 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p1UQ7icB8o 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:42.296 
10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p1UQ7icB8o 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p1UQ7icB8o' 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3923119 00:20:42.296 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3923119 /var/tmp/bdevperf.sock 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3923119 ']' 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:42.297 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.297 [2024-07-25 10:35:45.884694] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:42.297 [2024-07-25 10:35:45.884752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923119 ] 00:20:42.297 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.297 [2024-07-25 10:35:45.949648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.556 [2024-07-25 10:35:46.019369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.124 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.124 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:43.125 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p1UQ7icB8o 00:20:43.384 [2024-07-25 10:35:46.846017] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.384 [2024-07-25 10:35:46.846064] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:43.384 [2024-07-25 10:35:46.846075] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.p1UQ7icB8o 00:20:43.384 request: 00:20:43.384 { 00:20:43.384 "name": "TLSTEST", 00:20:43.384 "trtype": "tcp", 00:20:43.384 "traddr": "10.0.0.2", 00:20:43.384 "adrfam": "ipv4", 00:20:43.384 "trsvcid": "4420", 00:20:43.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.384 "prchk_reftag": false, 00:20:43.384 "prchk_guard": false, 00:20:43.384 "hdgst": false, 00:20:43.384 "ddgst": false, 00:20:43.384 "psk": "/tmp/tmp.p1UQ7icB8o", 00:20:43.384 "method": "bdev_nvme_attach_controller", 00:20:43.384 "req_id": 1 00:20:43.384 } 00:20:43.384 Got JSON-RPC error response 00:20:43.384 response: 00:20:43.384 { 00:20:43.384 "code": -1, 00:20:43.384 "message": "Operation not permitted" 00:20:43.384 } 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3923119 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3923119 ']' 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3923119 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3923119 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3923119' 00:20:43.384 killing process with pid 3923119 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3923119 00:20:43.384 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.384 
00:20:43.384 Latency(us) 00:20:43.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.384 =================================================================================================================== 00:20:43.384 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.384 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3923119 00:20:43.384 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:43.384 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:43.384 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:43.384 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:43.384 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:43.384 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3920804 00:20:43.384 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3920804 ']' 00:20:43.385 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3920804 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3920804 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3920804' 00:20:43.645 killing process with pid 3920804 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3920804 00:20:43.645 [2024-07-25 10:35:47.128213] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3920804 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3923377 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3923377 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3923377 ']' 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.645 10:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.645 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 [2024-07-25 10:35:47.374428] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:20:43.904 [2024-07-25 10:35:47.374478] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.904 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.904 [2024-07-25 10:35:47.447520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.904 [2024-07-25 10:35:47.519189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.904 [2024-07-25 10:35:47.519226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.904 [2024-07-25 10:35:47.519236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.904 [2024-07-25 10:35:47.519245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.904 [2024-07-25 10:35:47.519252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
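This nvmf_tgt restart sits in the middle of the key-permission checks: tls.sh@170 loosened /tmp/tmp.p1UQ7icB8o to 0666, after which the initiator-side load failed above ("Incorrect permissions for PSK file", RPC error -1 Operation not permitted), and the target-side nvmf_subsystem_add_host below fails for the same reason (tcp_load_psk, -32603 Internal error) until tls.sh@181 restores 0600. In short, both loaders insist on an owner-only key file:

    chmod 0600 /tmp/tmp.p1UQ7icB8o   # accepted by bdev_nvme_load_psk / tcp_load_psk
    chmod 0666 /tmp/tmp.p1UQ7icB8o   # rejected: "Incorrect permissions for PSK file"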
00:20:43.904 [2024-07-25 10:35:47.519272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.471 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.471 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:44.471 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.471 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.471 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.p1UQ7icB8o 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.p1UQ7icB8o 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.p1UQ7icB8o 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p1UQ7icB8o 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:44.728 [2024-07-25 10:35:48.377520] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.728 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:44.986 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:45.244 [2024-07-25 10:35:48.726407] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.244 [2024-07-25 10:35:48.726603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.244 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:45.244 malloc0 00:20:45.244 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:45.502 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p1UQ7icB8o 00:20:45.760 [2024-07-25 10:35:49.223910] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:45.760 [2024-07-25 10:35:49.223934] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:45.760 [2024-07-25 10:35:49.223957] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:45.760 request: 00:20:45.760 { 00:20:45.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.760 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.760 "psk": "/tmp/tmp.p1UQ7icB8o", 00:20:45.760 "method": "nvmf_subsystem_add_host", 00:20:45.760 "req_id": 1 00:20:45.760 } 00:20:45.760 Got JSON-RPC error response 00:20:45.760 response: 00:20:45.760 { 00:20:45.760 "code": -32603, 00:20:45.760 "message": "Internal error" 00:20:45.760 } 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3923377 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3923377 ']' 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3923377 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3923377 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3923377' 00:20:45.760 killing process with pid 3923377 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3923377 00:20:45.760 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3923377 00:20:46.019 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.p1UQ7icB8o 00:20:46.019 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3923698 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 3923698 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3923698 ']' 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.020 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.020 [2024-07-25 10:35:49.547289] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:20:46.020 [2024-07-25 10:35:49.547340] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.020 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.020 [2024-07-25 10:35:49.619655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.020 [2024-07-25 10:35:49.691652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.020 [2024-07-25 10:35:49.691688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.020 [2024-07-25 10:35:49.691698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.020 [2024-07-25 10:35:49.691706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.020 [2024-07-25 10:35:49.691719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
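With the key back at 0600, tls.sh@185 re-runs setup_nvmf_tgt against this fresh target (pid 3923698); condensed from the rpc.py calls traced below (and in the two earlier passes), with the workspace path shortened, the target-side TLS bring-up is:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k: TLS listener (flagged experimental in the log)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.p1UQ7icB8o            # PSK path flagged deprecated, removal in v24.09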
00:20:46.020 [2024-07-25 10:35:49.691741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.p1UQ7icB8o 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p1UQ7icB8o 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:46.956 [2024-07-25 10:35:50.554802] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.956 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:47.215 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:47.215 [2024-07-25 10:35:50.883622] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.215 [2024-07-25 10:35:50.883818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.215 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:47.474 malloc0 00:20:47.474 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p1UQ7icB8o 00:20:47.732 [2024-07-25 10:35:51.369095] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3923991 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3923991 /var/tmp/bdevperf.sock 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 3923991 ']' 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.732 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.732 [2024-07-25 10:35:51.429267] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:20:47.732 [2024-07-25 10:35:51.429319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923991 ] 00:20:47.991 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.991 [2024-07-25 10:35:51.495665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.991 [2024-07-25 10:35:51.569098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.559 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.559 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:48.559 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p1UQ7icB8o 00:20:48.818 [2024-07-25 10:35:52.355875] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.818 [2024-07-25 10:35:52.355944] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:48.818 TLSTESTn1 00:20:48.818 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:49.077 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:49.077 "subsystems": [ 00:20:49.077 { 00:20:49.077 "subsystem": "keyring", 00:20:49.077 "config": [] 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "subsystem": "iobuf", 00:20:49.077 "config": [ 00:20:49.077 { 00:20:49.077 "method": "iobuf_set_options", 00:20:49.077 "params": { 00:20:49.077 "small_pool_count": 8192, 00:20:49.077 "large_pool_count": 1024, 00:20:49.077 "small_bufsize": 8192, 00:20:49.077 "large_bufsize": 135168 00:20:49.077 } 00:20:49.077 } 00:20:49.077 ] 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "subsystem": "sock", 00:20:49.077 "config": [ 00:20:49.077 { 00:20:49.077 "method": "sock_set_default_impl", 00:20:49.077 "params": { 00:20:49.077 "impl_name": "posix" 00:20:49.077 } 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "method": "sock_impl_set_options", 00:20:49.077 "params": { 00:20:49.077 "impl_name": "ssl", 00:20:49.077 "recv_buf_size": 4096, 00:20:49.077 "send_buf_size": 4096, 
00:20:49.077 "enable_recv_pipe": true, 00:20:49.077 "enable_quickack": false, 00:20:49.077 "enable_placement_id": 0, 00:20:49.077 "enable_zerocopy_send_server": true, 00:20:49.077 "enable_zerocopy_send_client": false, 00:20:49.077 "zerocopy_threshold": 0, 00:20:49.077 "tls_version": 0, 00:20:49.077 "enable_ktls": false 00:20:49.077 } 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "method": "sock_impl_set_options", 00:20:49.077 "params": { 00:20:49.077 "impl_name": "posix", 00:20:49.077 "recv_buf_size": 2097152, 00:20:49.077 "send_buf_size": 2097152, 00:20:49.077 "enable_recv_pipe": true, 00:20:49.077 "enable_quickack": false, 00:20:49.077 "enable_placement_id": 0, 00:20:49.077 "enable_zerocopy_send_server": true, 00:20:49.077 "enable_zerocopy_send_client": false, 00:20:49.077 "zerocopy_threshold": 0, 00:20:49.077 "tls_version": 0, 00:20:49.077 "enable_ktls": false 00:20:49.077 } 00:20:49.077 } 00:20:49.077 ] 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "subsystem": "vmd", 00:20:49.077 "config": [] 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "subsystem": "accel", 00:20:49.077 "config": [ 00:20:49.077 { 00:20:49.077 "method": "accel_set_options", 00:20:49.077 "params": { 00:20:49.077 "small_cache_size": 128, 00:20:49.077 "large_cache_size": 16, 00:20:49.077 "task_count": 2048, 00:20:49.077 "sequence_count": 2048, 00:20:49.077 "buf_count": 2048 00:20:49.077 } 00:20:49.077 } 00:20:49.077 ] 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "subsystem": "bdev", 00:20:49.077 "config": [ 00:20:49.077 { 00:20:49.077 "method": "bdev_set_options", 00:20:49.077 "params": { 00:20:49.077 "bdev_io_pool_size": 65535, 00:20:49.077 "bdev_io_cache_size": 256, 00:20:49.077 "bdev_auto_examine": true, 00:20:49.077 "iobuf_small_cache_size": 128, 00:20:49.077 "iobuf_large_cache_size": 16 00:20:49.077 } 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "method": "bdev_raid_set_options", 00:20:49.077 "params": { 00:20:49.077 "process_window_size_kb": 1024, 00:20:49.077 "process_max_bandwidth_mb_sec": 0 00:20:49.077 } 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "method": "bdev_iscsi_set_options", 00:20:49.077 "params": { 00:20:49.077 "timeout_sec": 30 00:20:49.077 } 00:20:49.077 }, 00:20:49.077 { 00:20:49.077 "method": "bdev_nvme_set_options", 00:20:49.077 "params": { 00:20:49.077 "action_on_timeout": "none", 00:20:49.077 "timeout_us": 0, 00:20:49.077 "timeout_admin_us": 0, 00:20:49.077 "keep_alive_timeout_ms": 10000, 00:20:49.077 "arbitration_burst": 0, 00:20:49.077 "low_priority_weight": 0, 00:20:49.077 "medium_priority_weight": 0, 00:20:49.077 "high_priority_weight": 0, 00:20:49.077 "nvme_adminq_poll_period_us": 10000, 00:20:49.077 "nvme_ioq_poll_period_us": 0, 00:20:49.077 "io_queue_requests": 0, 00:20:49.077 "delay_cmd_submit": true, 00:20:49.078 "transport_retry_count": 4, 00:20:49.078 "bdev_retry_count": 3, 00:20:49.078 "transport_ack_timeout": 0, 00:20:49.078 "ctrlr_loss_timeout_sec": 0, 00:20:49.078 "reconnect_delay_sec": 0, 00:20:49.078 "fast_io_fail_timeout_sec": 0, 00:20:49.078 "disable_auto_failback": false, 00:20:49.078 "generate_uuids": false, 00:20:49.078 "transport_tos": 0, 00:20:49.078 "nvme_error_stat": false, 00:20:49.078 "rdma_srq_size": 0, 00:20:49.078 "io_path_stat": false, 00:20:49.078 "allow_accel_sequence": false, 00:20:49.078 "rdma_max_cq_size": 0, 00:20:49.078 "rdma_cm_event_timeout_ms": 0, 00:20:49.078 "dhchap_digests": [ 00:20:49.078 "sha256", 00:20:49.078 "sha384", 00:20:49.078 "sha512" 00:20:49.078 ], 00:20:49.078 "dhchap_dhgroups": [ 00:20:49.078 "null", 00:20:49.078 "ffdhe2048", 00:20:49.078 
"ffdhe3072", 00:20:49.078 "ffdhe4096", 00:20:49.078 "ffdhe6144", 00:20:49.078 "ffdhe8192" 00:20:49.078 ] 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "bdev_nvme_set_hotplug", 00:20:49.078 "params": { 00:20:49.078 "period_us": 100000, 00:20:49.078 "enable": false 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "bdev_malloc_create", 00:20:49.078 "params": { 00:20:49.078 "name": "malloc0", 00:20:49.078 "num_blocks": 8192, 00:20:49.078 "block_size": 4096, 00:20:49.078 "physical_block_size": 4096, 00:20:49.078 "uuid": "2e190962-3750-49de-b675-4f866aa775bd", 00:20:49.078 "optimal_io_boundary": 0, 00:20:49.078 "md_size": 0, 00:20:49.078 "dif_type": 0, 00:20:49.078 "dif_is_head_of_md": false, 00:20:49.078 "dif_pi_format": 0 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "bdev_wait_for_examine" 00:20:49.078 } 00:20:49.078 ] 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "subsystem": "nbd", 00:20:49.078 "config": [] 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "subsystem": "scheduler", 00:20:49.078 "config": [ 00:20:49.078 { 00:20:49.078 "method": "framework_set_scheduler", 00:20:49.078 "params": { 00:20:49.078 "name": "static" 00:20:49.078 } 00:20:49.078 } 00:20:49.078 ] 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "subsystem": "nvmf", 00:20:49.078 "config": [ 00:20:49.078 { 00:20:49.078 "method": "nvmf_set_config", 00:20:49.078 "params": { 00:20:49.078 "discovery_filter": "match_any", 00:20:49.078 "admin_cmd_passthru": { 00:20:49.078 "identify_ctrlr": false 00:20:49.078 } 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "nvmf_set_max_subsystems", 00:20:49.078 "params": { 00:20:49.078 "max_subsystems": 1024 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "nvmf_set_crdt", 00:20:49.078 "params": { 00:20:49.078 "crdt1": 0, 00:20:49.078 "crdt2": 0, 00:20:49.078 "crdt3": 0 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "nvmf_create_transport", 00:20:49.078 "params": { 00:20:49.078 "trtype": "TCP", 00:20:49.078 "max_queue_depth": 128, 00:20:49.078 "max_io_qpairs_per_ctrlr": 127, 00:20:49.078 "in_capsule_data_size": 4096, 00:20:49.078 "max_io_size": 131072, 00:20:49.078 "io_unit_size": 131072, 00:20:49.078 "max_aq_depth": 128, 00:20:49.078 "num_shared_buffers": 511, 00:20:49.078 "buf_cache_size": 4294967295, 00:20:49.078 "dif_insert_or_strip": false, 00:20:49.078 "zcopy": false, 00:20:49.078 "c2h_success": false, 00:20:49.078 "sock_priority": 0, 00:20:49.078 "abort_timeout_sec": 1, 00:20:49.078 "ack_timeout": 0, 00:20:49.078 "data_wr_pool_size": 0 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "nvmf_create_subsystem", 00:20:49.078 "params": { 00:20:49.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.078 "allow_any_host": false, 00:20:49.078 "serial_number": "SPDK00000000000001", 00:20:49.078 "model_number": "SPDK bdev Controller", 00:20:49.078 "max_namespaces": 10, 00:20:49.078 "min_cntlid": 1, 00:20:49.078 "max_cntlid": 65519, 00:20:49.078 "ana_reporting": false 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "nvmf_subsystem_add_host", 00:20:49.078 "params": { 00:20:49.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.078 "host": "nqn.2016-06.io.spdk:host1", 00:20:49.078 "psk": "/tmp/tmp.p1UQ7icB8o" 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "nvmf_subsystem_add_ns", 00:20:49.078 "params": { 00:20:49.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.078 "namespace": { 00:20:49.078 "nsid": 1, 00:20:49.078 
"bdev_name": "malloc0", 00:20:49.078 "nguid": "2E190962375049DEB6754F866AA775BD", 00:20:49.078 "uuid": "2e190962-3750-49de-b675-4f866aa775bd", 00:20:49.078 "no_auto_visible": false 00:20:49.078 } 00:20:49.078 } 00:20:49.078 }, 00:20:49.078 { 00:20:49.078 "method": "nvmf_subsystem_add_listener", 00:20:49.078 "params": { 00:20:49.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.078 "listen_address": { 00:20:49.078 "trtype": "TCP", 00:20:49.078 "adrfam": "IPv4", 00:20:49.078 "traddr": "10.0.0.2", 00:20:49.078 "trsvcid": "4420" 00:20:49.078 }, 00:20:49.078 "secure_channel": true 00:20:49.078 } 00:20:49.078 } 00:20:49.078 ] 00:20:49.078 } 00:20:49.078 ] 00:20:49.078 }' 00:20:49.078 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:49.338 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:49.338 "subsystems": [ 00:20:49.338 { 00:20:49.338 "subsystem": "keyring", 00:20:49.338 "config": [] 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "subsystem": "iobuf", 00:20:49.338 "config": [ 00:20:49.338 { 00:20:49.338 "method": "iobuf_set_options", 00:20:49.338 "params": { 00:20:49.338 "small_pool_count": 8192, 00:20:49.338 "large_pool_count": 1024, 00:20:49.338 "small_bufsize": 8192, 00:20:49.338 "large_bufsize": 135168 00:20:49.338 } 00:20:49.338 } 00:20:49.338 ] 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "subsystem": "sock", 00:20:49.338 "config": [ 00:20:49.338 { 00:20:49.338 "method": "sock_set_default_impl", 00:20:49.338 "params": { 00:20:49.338 "impl_name": "posix" 00:20:49.338 } 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "method": "sock_impl_set_options", 00:20:49.338 "params": { 00:20:49.338 "impl_name": "ssl", 00:20:49.338 "recv_buf_size": 4096, 00:20:49.338 "send_buf_size": 4096, 00:20:49.338 "enable_recv_pipe": true, 00:20:49.338 "enable_quickack": false, 00:20:49.338 "enable_placement_id": 0, 00:20:49.338 "enable_zerocopy_send_server": true, 00:20:49.338 "enable_zerocopy_send_client": false, 00:20:49.338 "zerocopy_threshold": 0, 00:20:49.338 "tls_version": 0, 00:20:49.338 "enable_ktls": false 00:20:49.338 } 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "method": "sock_impl_set_options", 00:20:49.338 "params": { 00:20:49.338 "impl_name": "posix", 00:20:49.338 "recv_buf_size": 2097152, 00:20:49.338 "send_buf_size": 2097152, 00:20:49.338 "enable_recv_pipe": true, 00:20:49.338 "enable_quickack": false, 00:20:49.338 "enable_placement_id": 0, 00:20:49.338 "enable_zerocopy_send_server": true, 00:20:49.338 "enable_zerocopy_send_client": false, 00:20:49.338 "zerocopy_threshold": 0, 00:20:49.338 "tls_version": 0, 00:20:49.338 "enable_ktls": false 00:20:49.338 } 00:20:49.338 } 00:20:49.338 ] 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "subsystem": "vmd", 00:20:49.338 "config": [] 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "subsystem": "accel", 00:20:49.338 "config": [ 00:20:49.338 { 00:20:49.338 "method": "accel_set_options", 00:20:49.338 "params": { 00:20:49.338 "small_cache_size": 128, 00:20:49.338 "large_cache_size": 16, 00:20:49.338 "task_count": 2048, 00:20:49.338 "sequence_count": 2048, 00:20:49.338 "buf_count": 2048 00:20:49.338 } 00:20:49.338 } 00:20:49.338 ] 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "subsystem": "bdev", 00:20:49.338 "config": [ 00:20:49.338 { 00:20:49.338 "method": "bdev_set_options", 00:20:49.338 "params": { 00:20:49.338 "bdev_io_pool_size": 65535, 00:20:49.338 "bdev_io_cache_size": 256, 00:20:49.338 
"bdev_auto_examine": true, 00:20:49.338 "iobuf_small_cache_size": 128, 00:20:49.338 "iobuf_large_cache_size": 16 00:20:49.338 } 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "method": "bdev_raid_set_options", 00:20:49.338 "params": { 00:20:49.338 "process_window_size_kb": 1024, 00:20:49.338 "process_max_bandwidth_mb_sec": 0 00:20:49.338 } 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "method": "bdev_iscsi_set_options", 00:20:49.338 "params": { 00:20:49.338 "timeout_sec": 30 00:20:49.338 } 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "method": "bdev_nvme_set_options", 00:20:49.338 "params": { 00:20:49.338 "action_on_timeout": "none", 00:20:49.338 "timeout_us": 0, 00:20:49.338 "timeout_admin_us": 0, 00:20:49.338 "keep_alive_timeout_ms": 10000, 00:20:49.338 "arbitration_burst": 0, 00:20:49.338 "low_priority_weight": 0, 00:20:49.338 "medium_priority_weight": 0, 00:20:49.338 "high_priority_weight": 0, 00:20:49.338 "nvme_adminq_poll_period_us": 10000, 00:20:49.338 "nvme_ioq_poll_period_us": 0, 00:20:49.338 "io_queue_requests": 512, 00:20:49.338 "delay_cmd_submit": true, 00:20:49.338 "transport_retry_count": 4, 00:20:49.338 "bdev_retry_count": 3, 00:20:49.338 "transport_ack_timeout": 0, 00:20:49.338 "ctrlr_loss_timeout_sec": 0, 00:20:49.338 "reconnect_delay_sec": 0, 00:20:49.338 "fast_io_fail_timeout_sec": 0, 00:20:49.338 "disable_auto_failback": false, 00:20:49.338 "generate_uuids": false, 00:20:49.338 "transport_tos": 0, 00:20:49.338 "nvme_error_stat": false, 00:20:49.338 "rdma_srq_size": 0, 00:20:49.338 "io_path_stat": false, 00:20:49.338 "allow_accel_sequence": false, 00:20:49.338 "rdma_max_cq_size": 0, 00:20:49.338 "rdma_cm_event_timeout_ms": 0, 00:20:49.338 "dhchap_digests": [ 00:20:49.338 "sha256", 00:20:49.338 "sha384", 00:20:49.338 "sha512" 00:20:49.338 ], 00:20:49.338 "dhchap_dhgroups": [ 00:20:49.338 "null", 00:20:49.338 "ffdhe2048", 00:20:49.338 "ffdhe3072", 00:20:49.338 "ffdhe4096", 00:20:49.338 "ffdhe6144", 00:20:49.338 "ffdhe8192" 00:20:49.338 ] 00:20:49.338 } 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "method": "bdev_nvme_attach_controller", 00:20:49.338 "params": { 00:20:49.338 "name": "TLSTEST", 00:20:49.338 "trtype": "TCP", 00:20:49.338 "adrfam": "IPv4", 00:20:49.338 "traddr": "10.0.0.2", 00:20:49.338 "trsvcid": "4420", 00:20:49.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.338 "prchk_reftag": false, 00:20:49.338 "prchk_guard": false, 00:20:49.338 "ctrlr_loss_timeout_sec": 0, 00:20:49.338 "reconnect_delay_sec": 0, 00:20:49.338 "fast_io_fail_timeout_sec": 0, 00:20:49.338 "psk": "/tmp/tmp.p1UQ7icB8o", 00:20:49.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.338 "hdgst": false, 00:20:49.338 "ddgst": false 00:20:49.338 } 00:20:49.338 }, 00:20:49.338 { 00:20:49.338 "method": "bdev_nvme_set_hotplug", 00:20:49.338 "params": { 00:20:49.338 "period_us": 100000, 00:20:49.338 "enable": false 00:20:49.338 } 00:20:49.339 }, 00:20:49.339 { 00:20:49.339 "method": "bdev_wait_for_examine" 00:20:49.339 } 00:20:49.339 ] 00:20:49.339 }, 00:20:49.339 { 00:20:49.339 "subsystem": "nbd", 00:20:49.339 "config": [] 00:20:49.339 } 00:20:49.339 ] 00:20:49.339 }' 00:20:49.339 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3923991 00:20:49.339 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3923991 ']' 00:20:49.339 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3923991 00:20:49.339 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:20:49.339 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.339 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3923991 00:20:49.339 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:49.339 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:49.339 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3923991' 00:20:49.339 killing process with pid 3923991 00:20:49.339 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3923991 00:20:49.339 Received shutdown signal, test time was about 10.000000 seconds 00:20:49.339 00:20:49.339 Latency(us) 00:20:49.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.339 =================================================================================================================== 00:20:49.339 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.339 [2024-07-25 10:35:53.023996] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:49.339 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3923991 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3923698 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3923698 ']' 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3923698 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3923698 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3923698' 00:20:49.598 killing process with pid 3923698 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3923698 00:20:49.598 [2024-07-25 10:35:53.256831] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:49.598 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3923698 00:20:49.858 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:49.858 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.858 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.858 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:49.858 "subsystems": [ 00:20:49.859 { 00:20:49.859 "subsystem": "keyring", 00:20:49.859 "config": [] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 
"subsystem": "iobuf", 00:20:49.859 "config": [ 00:20:49.859 { 00:20:49.859 "method": "iobuf_set_options", 00:20:49.859 "params": { 00:20:49.859 "small_pool_count": 8192, 00:20:49.859 "large_pool_count": 1024, 00:20:49.859 "small_bufsize": 8192, 00:20:49.859 "large_bufsize": 135168 00:20:49.859 } 00:20:49.859 } 00:20:49.859 ] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "subsystem": "sock", 00:20:49.859 "config": [ 00:20:49.859 { 00:20:49.859 "method": "sock_set_default_impl", 00:20:49.859 "params": { 00:20:49.859 "impl_name": "posix" 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "sock_impl_set_options", 00:20:49.859 "params": { 00:20:49.859 "impl_name": "ssl", 00:20:49.859 "recv_buf_size": 4096, 00:20:49.859 "send_buf_size": 4096, 00:20:49.859 "enable_recv_pipe": true, 00:20:49.859 "enable_quickack": false, 00:20:49.859 "enable_placement_id": 0, 00:20:49.859 "enable_zerocopy_send_server": true, 00:20:49.859 "enable_zerocopy_send_client": false, 00:20:49.859 "zerocopy_threshold": 0, 00:20:49.859 "tls_version": 0, 00:20:49.859 "enable_ktls": false 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "sock_impl_set_options", 00:20:49.859 "params": { 00:20:49.859 "impl_name": "posix", 00:20:49.859 "recv_buf_size": 2097152, 00:20:49.859 "send_buf_size": 2097152, 00:20:49.859 "enable_recv_pipe": true, 00:20:49.859 "enable_quickack": false, 00:20:49.859 "enable_placement_id": 0, 00:20:49.859 "enable_zerocopy_send_server": true, 00:20:49.859 "enable_zerocopy_send_client": false, 00:20:49.859 "zerocopy_threshold": 0, 00:20:49.859 "tls_version": 0, 00:20:49.859 "enable_ktls": false 00:20:49.859 } 00:20:49.859 } 00:20:49.859 ] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "subsystem": "vmd", 00:20:49.859 "config": [] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "subsystem": "accel", 00:20:49.859 "config": [ 00:20:49.859 { 00:20:49.859 "method": "accel_set_options", 00:20:49.859 "params": { 00:20:49.859 "small_cache_size": 128, 00:20:49.859 "large_cache_size": 16, 00:20:49.859 "task_count": 2048, 00:20:49.859 "sequence_count": 2048, 00:20:49.859 "buf_count": 2048 00:20:49.859 } 00:20:49.859 } 00:20:49.859 ] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "subsystem": "bdev", 00:20:49.859 "config": [ 00:20:49.859 { 00:20:49.859 "method": "bdev_set_options", 00:20:49.859 "params": { 00:20:49.859 "bdev_io_pool_size": 65535, 00:20:49.859 "bdev_io_cache_size": 256, 00:20:49.859 "bdev_auto_examine": true, 00:20:49.859 "iobuf_small_cache_size": 128, 00:20:49.859 "iobuf_large_cache_size": 16 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "bdev_raid_set_options", 00:20:49.859 "params": { 00:20:49.859 "process_window_size_kb": 1024, 00:20:49.859 "process_max_bandwidth_mb_sec": 0 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "bdev_iscsi_set_options", 00:20:49.859 "params": { 00:20:49.859 "timeout_sec": 30 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "bdev_nvme_set_options", 00:20:49.859 "params": { 00:20:49.859 "action_on_timeout": "none", 00:20:49.859 "timeout_us": 0, 00:20:49.859 "timeout_admin_us": 0, 00:20:49.859 "keep_alive_timeout_ms": 10000, 00:20:49.859 "arbitration_burst": 0, 00:20:49.859 "low_priority_weight": 0, 00:20:49.859 "medium_priority_weight": 0, 00:20:49.859 "high_priority_weight": 0, 00:20:49.859 "nvme_adminq_poll_period_us": 10000, 00:20:49.859 "nvme_ioq_poll_period_us": 0, 00:20:49.859 "io_queue_requests": 0, 00:20:49.859 "delay_cmd_submit": true, 00:20:49.859 "transport_retry_count": 4, 
00:20:49.859 "bdev_retry_count": 3, 00:20:49.859 "transport_ack_timeout": 0, 00:20:49.859 "ctrlr_loss_timeout_sec": 0, 00:20:49.859 "reconnect_delay_sec": 0, 00:20:49.859 "fast_io_fail_timeout_sec": 0, 00:20:49.859 "disable_auto_failback": false, 00:20:49.859 "generate_uuids": false, 00:20:49.859 "transport_tos": 0, 00:20:49.859 "nvme_error_stat": false, 00:20:49.859 "rdma_srq_size": 0, 00:20:49.859 "io_path_stat": false, 00:20:49.859 "allow_accel_sequence": false, 00:20:49.859 "rdma_max_cq_size": 0, 00:20:49.859 "rdma_cm_event_timeout_ms": 0, 00:20:49.859 "dhchap_digests": [ 00:20:49.859 "sha256", 00:20:49.859 "sha384", 00:20:49.859 "sha512" 00:20:49.859 ], 00:20:49.859 "dhchap_dhgroups": [ 00:20:49.859 "null", 00:20:49.859 "ffdhe2048", 00:20:49.859 "ffdhe3072", 00:20:49.859 "ffdhe4096", 00:20:49.859 "ffdhe6144", 00:20:49.859 "ffdhe8192" 00:20:49.859 ] 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "bdev_nvme_set_hotplug", 00:20:49.859 "params": { 00:20:49.859 "period_us": 100000, 00:20:49.859 "enable": false 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "bdev_malloc_create", 00:20:49.859 "params": { 00:20:49.859 "name": "malloc0", 00:20:49.859 "num_blocks": 8192, 00:20:49.859 "block_size": 4096, 00:20:49.859 "physical_block_size": 4096, 00:20:49.859 "uuid": "2e190962-3750-49de-b675-4f866aa775bd", 00:20:49.859 "optimal_io_boundary": 0, 00:20:49.859 "md_size": 0, 00:20:49.859 "dif_type": 0, 00:20:49.859 "dif_is_head_of_md": false, 00:20:49.859 "dif_pi_format": 0 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "bdev_wait_for_examine" 00:20:49.859 } 00:20:49.859 ] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "subsystem": "nbd", 00:20:49.859 "config": [] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "subsystem": "scheduler", 00:20:49.859 "config": [ 00:20:49.859 { 00:20:49.859 "method": "framework_set_scheduler", 00:20:49.859 "params": { 00:20:49.859 "name": "static" 00:20:49.859 } 00:20:49.859 } 00:20:49.859 ] 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "subsystem": "nvmf", 00:20:49.859 "config": [ 00:20:49.859 { 00:20:49.859 "method": "nvmf_set_config", 00:20:49.859 "params": { 00:20:49.859 "discovery_filter": "match_any", 00:20:49.859 "admin_cmd_passthru": { 00:20:49.859 "identify_ctrlr": false 00:20:49.859 } 00:20:49.859 } 00:20:49.859 }, 00:20:49.859 { 00:20:49.859 "method": "nvmf_set_max_subsystems", 00:20:49.859 "params": { 00:20:49.860 "max_subsystems": 1024 00:20:49.860 } 00:20:49.860 }, 00:20:49.860 { 00:20:49.860 "method": "nvmf_set_crdt", 00:20:49.860 "params": { 00:20:49.860 "crdt1": 0, 00:20:49.860 "crdt2": 0, 00:20:49.860 "crdt3": 0 00:20:49.860 } 00:20:49.860 }, 00:20:49.860 { 00:20:49.860 "method": "nvmf_create_transport", 00:20:49.860 "params": { 00:20:49.860 "trtype": "TCP", 00:20:49.860 "max_queue_depth": 128, 00:20:49.860 "max_io_qpairs_per_ctrlr": 127, 00:20:49.860 "in_capsule_data_size": 4096, 00:20:49.860 "max_io_size": 131072, 00:20:49.860 "io_unit_size": 131072, 00:20:49.860 "max_aq_depth": 128, 00:20:49.860 "num_shared_buffers": 511, 00:20:49.860 "buf_cache_size": 4294967295, 00:20:49.860 "dif_insert_or_strip": false, 00:20:49.860 "zcopy": false, 00:20:49.860 "c2h_success": false, 00:20:49.860 "sock_priority": 0, 00:20:49.860 "abort_timeout_sec": 1, 00:20:49.860 "ack_timeout": 0, 00:20:49.860 "data_wr_pool_size": 0 00:20:49.860 } 00:20:49.860 }, 00:20:49.860 { 00:20:49.860 "method": "nvmf_create_subsystem", 00:20:49.860 "params": { 00:20:49.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.860 
"allow_any_host": false, 00:20:49.860 "serial_number": "SPDK00000000000001", 00:20:49.860 "model_number": "SPDK bdev Controller", 00:20:49.860 "max_namespaces": 10, 00:20:49.860 "min_cntlid": 1, 00:20:49.860 "max_cntlid": 65519, 00:20:49.860 "ana_reporting": false 00:20:49.860 } 00:20:49.860 }, 00:20:49.860 { 00:20:49.860 "method": "nvmf_subsystem_add_host", 00:20:49.860 "params": { 00:20:49.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.860 "host": "nqn.2016-06.io.spdk:host1", 00:20:49.860 "psk": "/tmp/tmp.p1UQ7icB8o" 00:20:49.860 } 00:20:49.860 }, 00:20:49.860 { 00:20:49.860 "method": "nvmf_subsystem_add_ns", 00:20:49.860 "params": { 00:20:49.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.860 "namespace": { 00:20:49.860 "nsid": 1, 00:20:49.860 "bdev_name": "malloc0", 00:20:49.860 "nguid": "2E190962375049DEB6754F866AA775BD", 00:20:49.860 "uuid": "2e190962-3750-49de-b675-4f866aa775bd", 00:20:49.860 "no_auto_visible": false 00:20:49.860 } 00:20:49.860 } 00:20:49.860 }, 00:20:49.860 { 00:20:49.860 "method": "nvmf_subsystem_add_listener", 00:20:49.860 "params": { 00:20:49.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.860 "listen_address": { 00:20:49.860 "trtype": "TCP", 00:20:49.860 "adrfam": "IPv4", 00:20:49.860 "traddr": "10.0.0.2", 00:20:49.860 "trsvcid": "4420" 00:20:49.860 }, 00:20:49.860 "secure_channel": true 00:20:49.860 } 00:20:49.860 } 00:20:49.860 ] 00:20:49.860 } 00:20:49.860 ] 00:20:49.860 }' 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3924438 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3924438 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3924438 ']' 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.860 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.860 [2024-07-25 10:35:53.485443] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:20:49.860 [2024-07-25 10:35:53.485492] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.860 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.860 [2024-07-25 10:35:53.558319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.121 [2024-07-25 10:35:53.632231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:50.121 [2024-07-25 10:35:53.632268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.121 [2024-07-25 10:35:53.632278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.121 [2024-07-25 10:35:53.632286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.121 [2024-07-25 10:35:53.632294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.121 [2024-07-25 10:35:53.632349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.379 [2024-07-25 10:35:53.834989] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.379 [2024-07-25 10:35:53.856792] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:50.379 [2024-07-25 10:35:53.872835] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.379 [2024-07-25 10:35:53.873015] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.638 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.638 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:50.638 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.638 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.638 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3924554 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3924554 /var/tmp/bdevperf.sock 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3924554 ']' 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
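At this point the harness has relaunched nvmf_tgt from its own save_config dump, handed back on startup as -c /dev/fd/62, and bdevperf is being brought up the same way with its JSON on /dev/fd/63. A minimal sketch of that replay pattern, using the same binaries, flags and socket seen in the trace (paths shortened, and the ip netns wrapper and -i/-e flags the harness adds are left out here):

  # capture the live target's configuration over its default RPC socket
  ./scripts/rpc.py save_config > tgt.json

  # relaunch the target and feed the captured JSON back on startup;
  # process substitution reproduces the /dev/fd/NN style seen in the log
  ./build/bin/nvmf_tgt -m 0x2 -c <(cat tgt.json) &

  # bdevperf gets its config the same way; -z makes it wait on its private RPC socket
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(cat bdevperf.json) &

  # with the controller already described in the config, only the run needs triggering
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests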
00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:50.897 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:50.897 "subsystems": [ 00:20:50.897 { 00:20:50.897 "subsystem": "keyring", 00:20:50.897 "config": [] 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "subsystem": "iobuf", 00:20:50.897 "config": [ 00:20:50.897 { 00:20:50.897 "method": "iobuf_set_options", 00:20:50.897 "params": { 00:20:50.897 "small_pool_count": 8192, 00:20:50.897 "large_pool_count": 1024, 00:20:50.897 "small_bufsize": 8192, 00:20:50.897 "large_bufsize": 135168 00:20:50.897 } 00:20:50.897 } 00:20:50.897 ] 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "subsystem": "sock", 00:20:50.897 "config": [ 00:20:50.897 { 00:20:50.897 "method": "sock_set_default_impl", 00:20:50.897 "params": { 00:20:50.897 "impl_name": "posix" 00:20:50.897 } 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "method": "sock_impl_set_options", 00:20:50.897 "params": { 00:20:50.897 "impl_name": "ssl", 00:20:50.897 "recv_buf_size": 4096, 00:20:50.897 "send_buf_size": 4096, 00:20:50.897 "enable_recv_pipe": true, 00:20:50.897 "enable_quickack": false, 00:20:50.897 "enable_placement_id": 0, 00:20:50.897 "enable_zerocopy_send_server": true, 00:20:50.897 "enable_zerocopy_send_client": false, 00:20:50.897 "zerocopy_threshold": 0, 00:20:50.897 "tls_version": 0, 00:20:50.897 "enable_ktls": false 00:20:50.897 } 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "method": "sock_impl_set_options", 00:20:50.897 "params": { 00:20:50.897 "impl_name": "posix", 00:20:50.897 "recv_buf_size": 2097152, 00:20:50.897 "send_buf_size": 2097152, 00:20:50.897 "enable_recv_pipe": true, 00:20:50.897 "enable_quickack": false, 00:20:50.897 "enable_placement_id": 0, 00:20:50.897 "enable_zerocopy_send_server": true, 00:20:50.897 "enable_zerocopy_send_client": false, 00:20:50.897 "zerocopy_threshold": 0, 00:20:50.897 "tls_version": 0, 00:20:50.897 "enable_ktls": false 00:20:50.897 } 00:20:50.897 } 00:20:50.897 ] 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "subsystem": "vmd", 00:20:50.897 "config": [] 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "subsystem": "accel", 00:20:50.897 "config": [ 00:20:50.897 { 00:20:50.897 "method": "accel_set_options", 00:20:50.897 "params": { 00:20:50.897 "small_cache_size": 128, 00:20:50.897 "large_cache_size": 16, 00:20:50.897 "task_count": 2048, 00:20:50.897 "sequence_count": 2048, 00:20:50.897 "buf_count": 2048 00:20:50.897 } 00:20:50.897 } 00:20:50.897 ] 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "subsystem": "bdev", 00:20:50.897 "config": [ 00:20:50.897 { 00:20:50.897 "method": "bdev_set_options", 00:20:50.897 "params": { 00:20:50.897 "bdev_io_pool_size": 65535, 00:20:50.897 "bdev_io_cache_size": 256, 00:20:50.897 "bdev_auto_examine": true, 00:20:50.897 "iobuf_small_cache_size": 128, 00:20:50.897 "iobuf_large_cache_size": 16 00:20:50.897 } 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "method": "bdev_raid_set_options", 00:20:50.897 "params": { 00:20:50.897 "process_window_size_kb": 1024, 00:20:50.897 "process_max_bandwidth_mb_sec": 0 00:20:50.897 } 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "method": "bdev_iscsi_set_options", 00:20:50.897 "params": { 00:20:50.897 "timeout_sec": 30 00:20:50.897 } 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "method": "bdev_nvme_set_options", 00:20:50.897 "params": { 00:20:50.897 "action_on_timeout": "none", 00:20:50.897 "timeout_us": 0, 00:20:50.897 "timeout_admin_us": 0, 00:20:50.897 "keep_alive_timeout_ms": 10000, 00:20:50.897 
"arbitration_burst": 0, 00:20:50.897 "low_priority_weight": 0, 00:20:50.897 "medium_priority_weight": 0, 00:20:50.897 "high_priority_weight": 0, 00:20:50.897 "nvme_adminq_poll_period_us": 10000, 00:20:50.897 "nvme_ioq_poll_period_us": 0, 00:20:50.897 "io_queue_requests": 512, 00:20:50.897 "delay_cmd_submit": true, 00:20:50.897 "transport_retry_count": 4, 00:20:50.897 "bdev_retry_count": 3, 00:20:50.897 "transport_ack_timeout": 0, 00:20:50.897 "ctrlr_loss_timeout_sec": 0, 00:20:50.897 "reconnect_delay_sec": 0, 00:20:50.897 "fast_io_fail_timeout_sec": 0, 00:20:50.897 "disable_auto_failback": false, 00:20:50.897 "generate_uuids": false, 00:20:50.897 "transport_tos": 0, 00:20:50.897 "nvme_error_stat": false, 00:20:50.897 "rdma_srq_size": 0, 00:20:50.897 "io_path_stat": false, 00:20:50.897 "allow_accel_sequence": false, 00:20:50.897 "rdma_max_cq_size": 0, 00:20:50.897 "rdma_cm_event_timeout_ms": 0, 00:20:50.897 "dhchap_digests": [ 00:20:50.897 "sha256", 00:20:50.897 "sha384", 00:20:50.897 "sha512" 00:20:50.897 ], 00:20:50.897 "dhchap_dhgroups": [ 00:20:50.897 "null", 00:20:50.897 "ffdhe2048", 00:20:50.897 "ffdhe3072", 00:20:50.897 "ffdhe4096", 00:20:50.897 "ffdhe6144", 00:20:50.897 "ffdhe8192" 00:20:50.897 ] 00:20:50.897 } 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "method": "bdev_nvme_attach_controller", 00:20:50.897 "params": { 00:20:50.897 "name": "TLSTEST", 00:20:50.897 "trtype": "TCP", 00:20:50.897 "adrfam": "IPv4", 00:20:50.897 "traddr": "10.0.0.2", 00:20:50.897 "trsvcid": "4420", 00:20:50.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.897 "prchk_reftag": false, 00:20:50.897 "prchk_guard": false, 00:20:50.897 "ctrlr_loss_timeout_sec": 0, 00:20:50.897 "reconnect_delay_sec": 0, 00:20:50.897 "fast_io_fail_timeout_sec": 0, 00:20:50.897 "psk": "/tmp/tmp.p1UQ7icB8o", 00:20:50.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.898 "hdgst": false, 00:20:50.898 "ddgst": false 00:20:50.898 } 00:20:50.898 }, 00:20:50.898 { 00:20:50.898 "method": "bdev_nvme_set_hotplug", 00:20:50.898 "params": { 00:20:50.898 "period_us": 100000, 00:20:50.898 "enable": false 00:20:50.898 } 00:20:50.898 }, 00:20:50.898 { 00:20:50.898 "method": "bdev_wait_for_examine" 00:20:50.898 } 00:20:50.898 ] 00:20:50.898 }, 00:20:50.898 { 00:20:50.898 "subsystem": "nbd", 00:20:50.898 "config": [] 00:20:50.898 } 00:20:50.898 ] 00:20:50.898 }' 00:20:50.898 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.898 [2024-07-25 10:35:54.395256] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:20:50.898 [2024-07-25 10:35:54.395310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924554 ] 00:20:50.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.898 [2024-07-25 10:35:54.461312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.898 [2024-07-25 10:35:54.536135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.157 [2024-07-25 10:35:54.677666] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.157 [2024-07-25 10:35:54.677752] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:51.724 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.724 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:51.724 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:51.724 Running I/O for 10 seconds... 00:21:01.705 00:21:01.705 Latency(us) 00:21:01.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.705 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.705 Verification LBA range: start 0x0 length 0x2000 00:21:01.705 TLSTESTn1 : 10.02 4752.44 18.56 0.00 0.00 26884.36 4797.24 54945.38 00:21:01.705 =================================================================================================================== 00:21:01.705 Total : 4752.44 18.56 0.00 0.00 26884.36 4797.24 54945.38 00:21:01.705 0 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3924554 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3924554 ']' 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3924554 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3924554 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3924554' 00:21:01.705 killing process with pid 3924554 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3924554 00:21:01.705 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.705 00:21:01.705 Latency(us) 00:21:01.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.705 
=================================================================================================================== 00:21:01.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.705 [2024-07-25 10:36:05.397828] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.705 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3924554 00:21:01.964 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3924438 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3924438 ']' 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3924438 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3924438 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3924438' 00:21:01.965 killing process with pid 3924438 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3924438 00:21:01.965 [2024-07-25 10:36:05.634421] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:01.965 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3924438 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3927019 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3927019 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3927019 ']' 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
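Both killprocess calls above walk the same traced steps: guard against an empty pid, probe it with kill -0, read the comm name with ps, branch on a sudo wrapper, then signal and wait so the shutdown-time deprecation counters land in the log. A rough sketch of that pattern reconstructed from the xtrace lines (the real helper lives in autotest_common.sh and may differ in details):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # '[' -z ... ']' guard from the trace
      kill -0 "$pid" || return 0                 # nothing to do if the pid is already gone
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [ "$process_name" != sudo ]; then       # the harness special-cases a sudo comm here
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid" || true                        # lets shutdown notices (deprecation hits) flush
  }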
00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.224 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.224 [2024-07-25 10:36:05.885713] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:21:02.224 [2024-07-25 10:36:05.885773] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.224 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.482 [2024-07-25 10:36:05.959898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.482 [2024-07-25 10:36:06.029703] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.482 [2024-07-25 10:36:06.029750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.482 [2024-07-25 10:36:06.029760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.482 [2024-07-25 10:36:06.029769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.482 [2024-07-25 10:36:06.029777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.482 [2024-07-25 10:36:06.029798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.p1UQ7icB8o 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p1UQ7icB8o 00:21:03.048 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:03.366 [2024-07-25 10:36:06.876601] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.366 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:03.624 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:03.624 [2024-07-25 10:36:07.205432] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.624 [2024-07-25 10:36:07.205634] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.624 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:03.881 malloc0 00:21:03.881 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:03.881 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p1UQ7icB8o 00:21:04.139 [2024-07-25 10:36:07.715221] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3927468 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3927468 /var/tmp/bdevperf.sock 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3927468 ']' 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:04.139 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.139 [2024-07-25 10:36:07.768138] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
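setup_nvmf_tgt has now been driven twice with the same key file, so the target-side TLS sequence is worth restating in one place. Condensed from the rpc.py calls in the trace, with the NQNs, serial number and key path exactly as this run used them (only the script path is shortened):

  KEY=/tmp/tmp.p1UQ7icB8o
  NQN=nqn.2016-06.io.spdk:cnode1

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-capable, hence the "TLS support is considered experimental" notice
  ./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
  # --psk with a file path is the deprecated "PSK path" form the target keeps warning about
  ./scripts/rpc.py nvmf_subsystem_add_host "$NQN" nqn.2016-06.io.spdk:host1 --psk "$KEY"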
00:21:04.139 [2024-07-25 10:36:07.768187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927468 ] 00:21:04.139 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.139 [2024-07-25 10:36:07.839012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.396 [2024-07-25 10:36:07.914746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.961 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.961 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:04.961 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p1UQ7icB8o 00:21:05.219 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:05.219 [2024-07-25 10:36:08.897990] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.476 nvme0n1 00:21:05.476 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.476 Running I/O for 1 seconds... 00:21:06.407 00:21:06.407 Latency(us) 00:21:06.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.407 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:06.407 Verification LBA range: start 0x0 length 0x2000 00:21:06.407 nvme0n1 : 1.03 4436.98 17.33 0.00 0.00 28497.33 6658.46 47185.92 00:21:06.407 =================================================================================================================== 00:21:06.407 Total : 4436.98 17.33 0.00 0.00 28497.33 6658.46 47185.92 00:21:06.407 0 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3927468 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3927468 ']' 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3927468 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3927468 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3927468' 00:21:06.664 killing process with pid 3927468 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3927468 00:21:06.664 Received shutdown signal, 
test time was about 1.000000 seconds 00:21:06.664 00:21:06.664 Latency(us) 00:21:06.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.664 =================================================================================================================== 00:21:06.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3927468 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3927019 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3927019 ']' 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3927019 00:21:06.664 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3927019 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3927019' 00:21:06.922 killing process with pid 3927019 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3927019 00:21:06.922 [2024-07-25 10:36:10.415965] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3927019 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3927874 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3927874 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3927874 ']' 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
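The killprocess calls traced here follow one pattern: confirm the pid is still alive, inspect the process name so that a bare sudo wrapper is never signalled directly, then kill and reap it. A minimal sketch of that pattern (pid illustrative; the real helper in autotest_common.sh special-cases the sudo name rather than simply skipping it):

pid=3927468
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")            # e.g. reactor_1 for an SPDK app
    [[ $name == sudo ]] || kill "$pid"
    echo "killing process with pid $pid"
    wait "$pid" 2>/dev/null || true                    # wait only applies when the pid is a child of this shell
fi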
00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:06.922 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.180 [2024-07-25 10:36:10.664289] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:21:07.180 [2024-07-25 10:36:10.664340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.180 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.180 [2024-07-25 10:36:10.736731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.180 [2024-07-25 10:36:10.810833] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.180 [2024-07-25 10:36:10.810870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.180 [2024-07-25 10:36:10.810879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.180 [2024-07-25 10:36:10.810903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.180 [2024-07-25 10:36:10.810910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.180 [2024-07-25 10:36:10.810931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.115 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.116 [2024-07-25 10:36:11.516847] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.116 malloc0 00:21:08.116 [2024-07-25 10:36:11.545342] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:08.116 [2024-07-25 10:36:11.553850] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3928076 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3928076 /var/tmp/bdevperf.sock 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:08.116 10:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3928076 ']' 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.116 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.116 [2024-07-25 10:36:11.626620] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:21:08.116 [2024-07-25 10:36:11.626665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928076 ] 00:21:08.116 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.116 [2024-07-25 10:36:11.696318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.116 [2024-07-25 10:36:11.770717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.048 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.048 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:09.048 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p1UQ7icB8o 00:21:09.048 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:09.048 [2024-07-25 10:36:12.746195] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.306 nvme0n1 00:21:09.306 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.306 Running I/O for 1 seconds... 
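This is the initiator side of the TLS setup: the idle bdevperf app listening on /var/tmp/bdevperf.sock is handed the PSK through the keyring, attaches an NVMe/TCP controller with that key, and the verify workload is then driven through bdevperf.py. A sketch of those three calls, with the address, port and NQNs copied from the log and an illustrative PSK path:

sock=/var/tmp/bdevperf.sock
./scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/psk.key
./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests      # drives the 1-second verify run above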
00:21:10.237 00:21:10.237 Latency(us) 00:21:10.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.237 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.237 Verification LBA range: start 0x0 length 0x2000 00:21:10.237 nvme0n1 : 1.03 4038.62 15.78 0.00 0.00 31316.03 4639.95 104857.60 00:21:10.237 =================================================================================================================== 00:21:10.237 Total : 4038.62 15.78 0.00 0.00 31316.03 4639.95 104857.60 00:21:10.237 0 00:21:10.494 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:10.494 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.494 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.494 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.494 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:10.494 "subsystems": [ 00:21:10.494 { 00:21:10.494 "subsystem": "keyring", 00:21:10.494 "config": [ 00:21:10.494 { 00:21:10.494 "method": "keyring_file_add_key", 00:21:10.494 "params": { 00:21:10.494 "name": "key0", 00:21:10.494 "path": "/tmp/tmp.p1UQ7icB8o" 00:21:10.494 } 00:21:10.494 } 00:21:10.494 ] 00:21:10.494 }, 00:21:10.494 { 00:21:10.494 "subsystem": "iobuf", 00:21:10.494 "config": [ 00:21:10.494 { 00:21:10.494 "method": "iobuf_set_options", 00:21:10.494 "params": { 00:21:10.494 "small_pool_count": 8192, 00:21:10.494 "large_pool_count": 1024, 00:21:10.494 "small_bufsize": 8192, 00:21:10.494 "large_bufsize": 135168 00:21:10.494 } 00:21:10.494 } 00:21:10.494 ] 00:21:10.494 }, 00:21:10.494 { 00:21:10.494 "subsystem": "sock", 00:21:10.494 "config": [ 00:21:10.494 { 00:21:10.494 "method": "sock_set_default_impl", 00:21:10.494 "params": { 00:21:10.494 "impl_name": "posix" 00:21:10.494 } 00:21:10.494 }, 00:21:10.494 { 00:21:10.494 "method": "sock_impl_set_options", 00:21:10.494 "params": { 00:21:10.494 "impl_name": "ssl", 00:21:10.494 "recv_buf_size": 4096, 00:21:10.494 "send_buf_size": 4096, 00:21:10.494 "enable_recv_pipe": true, 00:21:10.494 "enable_quickack": false, 00:21:10.494 "enable_placement_id": 0, 00:21:10.494 "enable_zerocopy_send_server": true, 00:21:10.494 "enable_zerocopy_send_client": false, 00:21:10.494 "zerocopy_threshold": 0, 00:21:10.494 "tls_version": 0, 00:21:10.494 "enable_ktls": false 00:21:10.494 } 00:21:10.494 }, 00:21:10.494 { 00:21:10.495 "method": "sock_impl_set_options", 00:21:10.495 "params": { 00:21:10.495 "impl_name": "posix", 00:21:10.495 "recv_buf_size": 2097152, 00:21:10.495 "send_buf_size": 2097152, 00:21:10.495 "enable_recv_pipe": true, 00:21:10.495 "enable_quickack": false, 00:21:10.495 "enable_placement_id": 0, 00:21:10.495 "enable_zerocopy_send_server": true, 00:21:10.495 "enable_zerocopy_send_client": false, 00:21:10.495 "zerocopy_threshold": 0, 00:21:10.495 "tls_version": 0, 00:21:10.495 "enable_ktls": false 00:21:10.495 } 00:21:10.495 } 00:21:10.495 ] 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "subsystem": "vmd", 00:21:10.495 "config": [] 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "subsystem": "accel", 00:21:10.495 "config": [ 00:21:10.495 { 00:21:10.495 "method": "accel_set_options", 00:21:10.495 "params": { 00:21:10.495 "small_cache_size": 128, 00:21:10.495 "large_cache_size": 16, 00:21:10.495 "task_count": 2048, 00:21:10.495 "sequence_count": 2048, 00:21:10.495 
"buf_count": 2048 00:21:10.495 } 00:21:10.495 } 00:21:10.495 ] 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "subsystem": "bdev", 00:21:10.495 "config": [ 00:21:10.495 { 00:21:10.495 "method": "bdev_set_options", 00:21:10.495 "params": { 00:21:10.495 "bdev_io_pool_size": 65535, 00:21:10.495 "bdev_io_cache_size": 256, 00:21:10.495 "bdev_auto_examine": true, 00:21:10.495 "iobuf_small_cache_size": 128, 00:21:10.495 "iobuf_large_cache_size": 16 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "bdev_raid_set_options", 00:21:10.495 "params": { 00:21:10.495 "process_window_size_kb": 1024, 00:21:10.495 "process_max_bandwidth_mb_sec": 0 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "bdev_iscsi_set_options", 00:21:10.495 "params": { 00:21:10.495 "timeout_sec": 30 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "bdev_nvme_set_options", 00:21:10.495 "params": { 00:21:10.495 "action_on_timeout": "none", 00:21:10.495 "timeout_us": 0, 00:21:10.495 "timeout_admin_us": 0, 00:21:10.495 "keep_alive_timeout_ms": 10000, 00:21:10.495 "arbitration_burst": 0, 00:21:10.495 "low_priority_weight": 0, 00:21:10.495 "medium_priority_weight": 0, 00:21:10.495 "high_priority_weight": 0, 00:21:10.495 "nvme_adminq_poll_period_us": 10000, 00:21:10.495 "nvme_ioq_poll_period_us": 0, 00:21:10.495 "io_queue_requests": 0, 00:21:10.495 "delay_cmd_submit": true, 00:21:10.495 "transport_retry_count": 4, 00:21:10.495 "bdev_retry_count": 3, 00:21:10.495 "transport_ack_timeout": 0, 00:21:10.495 "ctrlr_loss_timeout_sec": 0, 00:21:10.495 "reconnect_delay_sec": 0, 00:21:10.495 "fast_io_fail_timeout_sec": 0, 00:21:10.495 "disable_auto_failback": false, 00:21:10.495 "generate_uuids": false, 00:21:10.495 "transport_tos": 0, 00:21:10.495 "nvme_error_stat": false, 00:21:10.495 "rdma_srq_size": 0, 00:21:10.495 "io_path_stat": false, 00:21:10.495 "allow_accel_sequence": false, 00:21:10.495 "rdma_max_cq_size": 0, 00:21:10.495 "rdma_cm_event_timeout_ms": 0, 00:21:10.495 "dhchap_digests": [ 00:21:10.495 "sha256", 00:21:10.495 "sha384", 00:21:10.495 "sha512" 00:21:10.495 ], 00:21:10.495 "dhchap_dhgroups": [ 00:21:10.495 "null", 00:21:10.495 "ffdhe2048", 00:21:10.495 "ffdhe3072", 00:21:10.495 "ffdhe4096", 00:21:10.495 "ffdhe6144", 00:21:10.495 "ffdhe8192" 00:21:10.495 ] 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "bdev_nvme_set_hotplug", 00:21:10.495 "params": { 00:21:10.495 "period_us": 100000, 00:21:10.495 "enable": false 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "bdev_malloc_create", 00:21:10.495 "params": { 00:21:10.495 "name": "malloc0", 00:21:10.495 "num_blocks": 8192, 00:21:10.495 "block_size": 4096, 00:21:10.495 "physical_block_size": 4096, 00:21:10.495 "uuid": "08975aba-ea97-4e7f-abbe-5fde7b8a1f4e", 00:21:10.495 "optimal_io_boundary": 0, 00:21:10.495 "md_size": 0, 00:21:10.495 "dif_type": 0, 00:21:10.495 "dif_is_head_of_md": false, 00:21:10.495 "dif_pi_format": 0 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "bdev_wait_for_examine" 00:21:10.495 } 00:21:10.495 ] 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "subsystem": "nbd", 00:21:10.495 "config": [] 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "subsystem": "scheduler", 00:21:10.495 "config": [ 00:21:10.495 { 00:21:10.495 "method": "framework_set_scheduler", 00:21:10.495 "params": { 00:21:10.495 "name": "static" 00:21:10.495 } 00:21:10.495 } 00:21:10.495 ] 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "subsystem": "nvmf", 00:21:10.495 "config": [ 00:21:10.495 { 
00:21:10.495 "method": "nvmf_set_config", 00:21:10.495 "params": { 00:21:10.495 "discovery_filter": "match_any", 00:21:10.495 "admin_cmd_passthru": { 00:21:10.495 "identify_ctrlr": false 00:21:10.495 } 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "nvmf_set_max_subsystems", 00:21:10.495 "params": { 00:21:10.495 "max_subsystems": 1024 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "nvmf_set_crdt", 00:21:10.495 "params": { 00:21:10.495 "crdt1": 0, 00:21:10.495 "crdt2": 0, 00:21:10.495 "crdt3": 0 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "nvmf_create_transport", 00:21:10.495 "params": { 00:21:10.495 "trtype": "TCP", 00:21:10.495 "max_queue_depth": 128, 00:21:10.495 "max_io_qpairs_per_ctrlr": 127, 00:21:10.495 "in_capsule_data_size": 4096, 00:21:10.495 "max_io_size": 131072, 00:21:10.495 "io_unit_size": 131072, 00:21:10.495 "max_aq_depth": 128, 00:21:10.495 "num_shared_buffers": 511, 00:21:10.495 "buf_cache_size": 4294967295, 00:21:10.495 "dif_insert_or_strip": false, 00:21:10.495 "zcopy": false, 00:21:10.495 "c2h_success": false, 00:21:10.495 "sock_priority": 0, 00:21:10.495 "abort_timeout_sec": 1, 00:21:10.495 "ack_timeout": 0, 00:21:10.495 "data_wr_pool_size": 0 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "nvmf_create_subsystem", 00:21:10.495 "params": { 00:21:10.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.495 "allow_any_host": false, 00:21:10.495 "serial_number": "00000000000000000000", 00:21:10.495 "model_number": "SPDK bdev Controller", 00:21:10.495 "max_namespaces": 32, 00:21:10.495 "min_cntlid": 1, 00:21:10.495 "max_cntlid": 65519, 00:21:10.495 "ana_reporting": false 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "nvmf_subsystem_add_host", 00:21:10.495 "params": { 00:21:10.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.495 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.495 "psk": "key0" 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "nvmf_subsystem_add_ns", 00:21:10.495 "params": { 00:21:10.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.495 "namespace": { 00:21:10.495 "nsid": 1, 00:21:10.495 "bdev_name": "malloc0", 00:21:10.495 "nguid": "08975ABAEA974E7FABBE5FDE7B8A1F4E", 00:21:10.495 "uuid": "08975aba-ea97-4e7f-abbe-5fde7b8a1f4e", 00:21:10.495 "no_auto_visible": false 00:21:10.495 } 00:21:10.495 } 00:21:10.495 }, 00:21:10.495 { 00:21:10.495 "method": "nvmf_subsystem_add_listener", 00:21:10.495 "params": { 00:21:10.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.495 "listen_address": { 00:21:10.495 "trtype": "TCP", 00:21:10.495 "adrfam": "IPv4", 00:21:10.495 "traddr": "10.0.0.2", 00:21:10.495 "trsvcid": "4420" 00:21:10.495 }, 00:21:10.495 "secure_channel": false, 00:21:10.495 "sock_impl": "ssl" 00:21:10.495 } 00:21:10.495 } 00:21:10.495 ] 00:21:10.495 } 00:21:10.495 ] 00:21:10.495 }' 00:21:10.495 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:10.754 "subsystems": [ 00:21:10.754 { 00:21:10.754 "subsystem": "keyring", 00:21:10.754 "config": [ 00:21:10.754 { 00:21:10.754 "method": "keyring_file_add_key", 00:21:10.754 "params": { 00:21:10.754 "name": "key0", 00:21:10.754 "path": "/tmp/tmp.p1UQ7icB8o" 00:21:10.754 } 00:21:10.754 } 00:21:10.754 ] 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "subsystem": "iobuf", 
00:21:10.754 "config": [ 00:21:10.754 { 00:21:10.754 "method": "iobuf_set_options", 00:21:10.754 "params": { 00:21:10.754 "small_pool_count": 8192, 00:21:10.754 "large_pool_count": 1024, 00:21:10.754 "small_bufsize": 8192, 00:21:10.754 "large_bufsize": 135168 00:21:10.754 } 00:21:10.754 } 00:21:10.754 ] 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "subsystem": "sock", 00:21:10.754 "config": [ 00:21:10.754 { 00:21:10.754 "method": "sock_set_default_impl", 00:21:10.754 "params": { 00:21:10.754 "impl_name": "posix" 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "sock_impl_set_options", 00:21:10.754 "params": { 00:21:10.754 "impl_name": "ssl", 00:21:10.754 "recv_buf_size": 4096, 00:21:10.754 "send_buf_size": 4096, 00:21:10.754 "enable_recv_pipe": true, 00:21:10.754 "enable_quickack": false, 00:21:10.754 "enable_placement_id": 0, 00:21:10.754 "enable_zerocopy_send_server": true, 00:21:10.754 "enable_zerocopy_send_client": false, 00:21:10.754 "zerocopy_threshold": 0, 00:21:10.754 "tls_version": 0, 00:21:10.754 "enable_ktls": false 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "sock_impl_set_options", 00:21:10.754 "params": { 00:21:10.754 "impl_name": "posix", 00:21:10.754 "recv_buf_size": 2097152, 00:21:10.754 "send_buf_size": 2097152, 00:21:10.754 "enable_recv_pipe": true, 00:21:10.754 "enable_quickack": false, 00:21:10.754 "enable_placement_id": 0, 00:21:10.754 "enable_zerocopy_send_server": true, 00:21:10.754 "enable_zerocopy_send_client": false, 00:21:10.754 "zerocopy_threshold": 0, 00:21:10.754 "tls_version": 0, 00:21:10.754 "enable_ktls": false 00:21:10.754 } 00:21:10.754 } 00:21:10.754 ] 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "subsystem": "vmd", 00:21:10.754 "config": [] 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "subsystem": "accel", 00:21:10.754 "config": [ 00:21:10.754 { 00:21:10.754 "method": "accel_set_options", 00:21:10.754 "params": { 00:21:10.754 "small_cache_size": 128, 00:21:10.754 "large_cache_size": 16, 00:21:10.754 "task_count": 2048, 00:21:10.754 "sequence_count": 2048, 00:21:10.754 "buf_count": 2048 00:21:10.754 } 00:21:10.754 } 00:21:10.754 ] 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "subsystem": "bdev", 00:21:10.754 "config": [ 00:21:10.754 { 00:21:10.754 "method": "bdev_set_options", 00:21:10.754 "params": { 00:21:10.754 "bdev_io_pool_size": 65535, 00:21:10.754 "bdev_io_cache_size": 256, 00:21:10.754 "bdev_auto_examine": true, 00:21:10.754 "iobuf_small_cache_size": 128, 00:21:10.754 "iobuf_large_cache_size": 16 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "bdev_raid_set_options", 00:21:10.754 "params": { 00:21:10.754 "process_window_size_kb": 1024, 00:21:10.754 "process_max_bandwidth_mb_sec": 0 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "bdev_iscsi_set_options", 00:21:10.754 "params": { 00:21:10.754 "timeout_sec": 30 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "bdev_nvme_set_options", 00:21:10.754 "params": { 00:21:10.754 "action_on_timeout": "none", 00:21:10.754 "timeout_us": 0, 00:21:10.754 "timeout_admin_us": 0, 00:21:10.754 "keep_alive_timeout_ms": 10000, 00:21:10.754 "arbitration_burst": 0, 00:21:10.754 "low_priority_weight": 0, 00:21:10.754 "medium_priority_weight": 0, 00:21:10.754 "high_priority_weight": 0, 00:21:10.754 "nvme_adminq_poll_period_us": 10000, 00:21:10.754 "nvme_ioq_poll_period_us": 0, 00:21:10.754 "io_queue_requests": 512, 00:21:10.754 "delay_cmd_submit": true, 00:21:10.754 "transport_retry_count": 4, 00:21:10.754 
"bdev_retry_count": 3, 00:21:10.754 "transport_ack_timeout": 0, 00:21:10.754 "ctrlr_loss_timeout_sec": 0, 00:21:10.754 "reconnect_delay_sec": 0, 00:21:10.754 "fast_io_fail_timeout_sec": 0, 00:21:10.754 "disable_auto_failback": false, 00:21:10.754 "generate_uuids": false, 00:21:10.754 "transport_tos": 0, 00:21:10.754 "nvme_error_stat": false, 00:21:10.754 "rdma_srq_size": 0, 00:21:10.754 "io_path_stat": false, 00:21:10.754 "allow_accel_sequence": false, 00:21:10.754 "rdma_max_cq_size": 0, 00:21:10.754 "rdma_cm_event_timeout_ms": 0, 00:21:10.754 "dhchap_digests": [ 00:21:10.754 "sha256", 00:21:10.754 "sha384", 00:21:10.754 "sha512" 00:21:10.754 ], 00:21:10.754 "dhchap_dhgroups": [ 00:21:10.754 "null", 00:21:10.754 "ffdhe2048", 00:21:10.754 "ffdhe3072", 00:21:10.754 "ffdhe4096", 00:21:10.754 "ffdhe6144", 00:21:10.754 "ffdhe8192" 00:21:10.754 ] 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "bdev_nvme_attach_controller", 00:21:10.754 "params": { 00:21:10.754 "name": "nvme0", 00:21:10.754 "trtype": "TCP", 00:21:10.754 "adrfam": "IPv4", 00:21:10.754 "traddr": "10.0.0.2", 00:21:10.754 "trsvcid": "4420", 00:21:10.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.754 "prchk_reftag": false, 00:21:10.754 "prchk_guard": false, 00:21:10.754 "ctrlr_loss_timeout_sec": 0, 00:21:10.754 "reconnect_delay_sec": 0, 00:21:10.754 "fast_io_fail_timeout_sec": 0, 00:21:10.754 "psk": "key0", 00:21:10.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.754 "hdgst": false, 00:21:10.754 "ddgst": false 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "bdev_nvme_set_hotplug", 00:21:10.754 "params": { 00:21:10.754 "period_us": 100000, 00:21:10.754 "enable": false 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "bdev_enable_histogram", 00:21:10.754 "params": { 00:21:10.754 "name": "nvme0n1", 00:21:10.754 "enable": true 00:21:10.754 } 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "method": "bdev_wait_for_examine" 00:21:10.754 } 00:21:10.754 ] 00:21:10.754 }, 00:21:10.754 { 00:21:10.754 "subsystem": "nbd", 00:21:10.754 "config": [] 00:21:10.754 } 00:21:10.754 ] 00:21:10.754 }' 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3928076 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3928076 ']' 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3928076 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3928076 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3928076' 00:21:10.754 killing process with pid 3928076 00:21:10.754 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3928076 00:21:10.754 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.754 00:21:10.755 Latency(us) 00:21:10.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:10.755 =================================================================================================================== 00:21:10.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.755 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3928076 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3927874 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3927874 ']' 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3927874 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3927874 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3927874' 00:21:11.013 killing process with pid 3927874 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3927874 00:21:11.013 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3927874 00:21:11.271 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:11.271 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.271 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:11.271 "subsystems": [ 00:21:11.271 { 00:21:11.271 "subsystem": "keyring", 00:21:11.271 "config": [ 00:21:11.271 { 00:21:11.271 "method": "keyring_file_add_key", 00:21:11.271 "params": { 00:21:11.271 "name": "key0", 00:21:11.271 "path": "/tmp/tmp.p1UQ7icB8o" 00:21:11.271 } 00:21:11.271 } 00:21:11.271 ] 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "subsystem": "iobuf", 00:21:11.271 "config": [ 00:21:11.271 { 00:21:11.271 "method": "iobuf_set_options", 00:21:11.271 "params": { 00:21:11.271 "small_pool_count": 8192, 00:21:11.271 "large_pool_count": 1024, 00:21:11.271 "small_bufsize": 8192, 00:21:11.271 "large_bufsize": 135168 00:21:11.271 } 00:21:11.271 } 00:21:11.271 ] 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "subsystem": "sock", 00:21:11.271 "config": [ 00:21:11.271 { 00:21:11.271 "method": "sock_set_default_impl", 00:21:11.271 "params": { 00:21:11.271 "impl_name": "posix" 00:21:11.271 } 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "method": "sock_impl_set_options", 00:21:11.271 "params": { 00:21:11.271 "impl_name": "ssl", 00:21:11.271 "recv_buf_size": 4096, 00:21:11.271 "send_buf_size": 4096, 00:21:11.271 "enable_recv_pipe": true, 00:21:11.271 "enable_quickack": false, 00:21:11.271 "enable_placement_id": 0, 00:21:11.271 "enable_zerocopy_send_server": true, 00:21:11.271 "enable_zerocopy_send_client": false, 00:21:11.271 "zerocopy_threshold": 0, 00:21:11.271 "tls_version": 0, 00:21:11.271 "enable_ktls": false 00:21:11.271 } 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "method": "sock_impl_set_options", 00:21:11.271 "params": { 00:21:11.271 "impl_name": "posix", 00:21:11.271 "recv_buf_size": 
2097152, 00:21:11.271 "send_buf_size": 2097152, 00:21:11.271 "enable_recv_pipe": true, 00:21:11.271 "enable_quickack": false, 00:21:11.271 "enable_placement_id": 0, 00:21:11.271 "enable_zerocopy_send_server": true, 00:21:11.271 "enable_zerocopy_send_client": false, 00:21:11.271 "zerocopy_threshold": 0, 00:21:11.271 "tls_version": 0, 00:21:11.271 "enable_ktls": false 00:21:11.271 } 00:21:11.271 } 00:21:11.271 ] 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "subsystem": "vmd", 00:21:11.271 "config": [] 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "subsystem": "accel", 00:21:11.271 "config": [ 00:21:11.271 { 00:21:11.271 "method": "accel_set_options", 00:21:11.271 "params": { 00:21:11.271 "small_cache_size": 128, 00:21:11.271 "large_cache_size": 16, 00:21:11.271 "task_count": 2048, 00:21:11.271 "sequence_count": 2048, 00:21:11.271 "buf_count": 2048 00:21:11.271 } 00:21:11.271 } 00:21:11.271 ] 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "subsystem": "bdev", 00:21:11.271 "config": [ 00:21:11.271 { 00:21:11.271 "method": "bdev_set_options", 00:21:11.271 "params": { 00:21:11.271 "bdev_io_pool_size": 65535, 00:21:11.271 "bdev_io_cache_size": 256, 00:21:11.271 "bdev_auto_examine": true, 00:21:11.271 "iobuf_small_cache_size": 128, 00:21:11.271 "iobuf_large_cache_size": 16 00:21:11.271 } 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "method": "bdev_raid_set_options", 00:21:11.271 "params": { 00:21:11.271 "process_window_size_kb": 1024, 00:21:11.271 "process_max_bandwidth_mb_sec": 0 00:21:11.271 } 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "method": "bdev_iscsi_set_options", 00:21:11.271 "params": { 00:21:11.271 "timeout_sec": 30 00:21:11.271 } 00:21:11.271 }, 00:21:11.271 { 00:21:11.271 "method": "bdev_nvme_set_options", 00:21:11.271 "params": { 00:21:11.271 "action_on_timeout": "none", 00:21:11.271 "timeout_us": 0, 00:21:11.271 "timeout_admin_us": 0, 00:21:11.271 "keep_alive_timeout_ms": 10000, 00:21:11.272 "arbitration_burst": 0, 00:21:11.272 "low_priority_weight": 0, 00:21:11.272 "medium_priority_weight": 0, 00:21:11.272 "high_priority_weight": 0, 00:21:11.272 "nvme_adminq_poll_period_us": 10000, 00:21:11.272 "nvme_ioq_poll_period_us": 0, 00:21:11.272 "io_queue_requests": 0, 00:21:11.272 "delay_cmd_submit": true, 00:21:11.272 "transport_retry_count": 4, 00:21:11.272 "bdev_retry_count": 3, 00:21:11.272 "transport_ack_timeout": 0, 00:21:11.272 "ctrlr_loss_timeout_sec": 0, 00:21:11.272 "reconnect_delay_sec": 0, 00:21:11.272 "fast_io_fail_timeout_sec": 0, 00:21:11.272 "disable_auto_failback": false, 00:21:11.272 "generate_uuids": false, 00:21:11.272 "transport_tos": 0, 00:21:11.272 "nvme_error_stat": false, 00:21:11.272 "rdma_srq_size": 0, 00:21:11.272 "io_path_stat": false, 00:21:11.272 "allow_accel_sequence": false, 00:21:11.272 "rdma_max_cq_size": 0, 00:21:11.272 "rdma_cm_event_timeout_ms": 0, 00:21:11.272 "dhchap_digests": [ 00:21:11.272 "sha256", 00:21:11.272 "sha384", 00:21:11.272 "sha512" 00:21:11.272 ], 00:21:11.272 "dhchap_dhgroups": [ 00:21:11.272 "null", 00:21:11.272 "ffdhe2048", 00:21:11.272 "ffdhe3072", 00:21:11.272 "ffdhe4096", 00:21:11.272 "ffdhe6144", 00:21:11.272 "ffdhe8192" 00:21:11.272 ] 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "bdev_nvme_set_hotplug", 00:21:11.272 "params": { 00:21:11.272 "period_us": 100000, 00:21:11.272 "enable": false 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "bdev_malloc_create", 00:21:11.272 "params": { 00:21:11.272 "name": "malloc0", 00:21:11.272 "num_blocks": 8192, 00:21:11.272 "block_size": 4096, 
00:21:11.272 "physical_block_size": 4096, 00:21:11.272 "uuid": "08975aba-ea97-4e7f-abbe-5fde7b8a1f4e", 00:21:11.272 "optimal_io_boundary": 0, 00:21:11.272 "md_size": 0, 00:21:11.272 "dif_type": 0, 00:21:11.272 "dif_is_head_of_md": false, 00:21:11.272 "dif_pi_format": 0 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "bdev_wait_for_examine" 00:21:11.272 } 00:21:11.272 ] 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "subsystem": "nbd", 00:21:11.272 "config": [] 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "subsystem": "scheduler", 00:21:11.272 "config": [ 00:21:11.272 { 00:21:11.272 "method": "framework_set_scheduler", 00:21:11.272 "params": { 00:21:11.272 "name": "static" 00:21:11.272 } 00:21:11.272 } 00:21:11.272 ] 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "subsystem": "nvmf", 00:21:11.272 "config": [ 00:21:11.272 { 00:21:11.272 "method": "nvmf_set_config", 00:21:11.272 "params": { 00:21:11.272 "discovery_filter": "match_any", 00:21:11.272 "admin_cmd_passthru": { 00:21:11.272 "identify_ctrlr": false 00:21:11.272 } 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "nvmf_set_max_subsystems", 00:21:11.272 "params": { 00:21:11.272 "max_subsystems": 1024 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "nvmf_set_crdt", 00:21:11.272 "params": { 00:21:11.272 "crdt1": 0, 00:21:11.272 "crdt2": 0, 00:21:11.272 "crdt3": 0 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "nvmf_create_transport", 00:21:11.272 "params": { 00:21:11.272 "trtype": "TCP", 00:21:11.272 "max_queue_depth": 128, 00:21:11.272 "max_io_qpairs_per_ctrlr": 127, 00:21:11.272 "in_capsule_data_size": 4096, 00:21:11.272 "max_io_size": 131072, 00:21:11.272 "io_unit_size": 131072, 00:21:11.272 "max_aq_depth": 128, 00:21:11.272 "num_shared_buffers": 511, 00:21:11.272 "buf_cache_size": 4294967295, 00:21:11.272 "dif_insert_or_strip": false, 00:21:11.272 "zcopy": false, 00:21:11.272 "c2h_success": false, 00:21:11.272 "sock_priority": 0, 00:21:11.272 "abort_timeout_sec": 1, 00:21:11.272 "ack_timeout": 0, 00:21:11.272 "data_wr_pool_size": 0 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "nvmf_create_subsystem", 00:21:11.272 "params": { 00:21:11.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.272 "allow_any_host": false, 00:21:11.272 "serial_number": "00000000000000000000", 00:21:11.272 "model_number": "SPDK bdev Controller", 00:21:11.272 "max_namespaces": 32, 00:21:11.272 "min_cntlid": 1, 00:21:11.272 "max_cntlid": 65519, 00:21:11.272 "ana_reporting": false 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "nvmf_subsystem_add_host", 00:21:11.272 "params": { 00:21:11.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.272 "host": "nqn.2016-06.io.spdk:host1", 00:21:11.272 "psk": "key0" 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "nvmf_subsystem_add_ns", 00:21:11.272 "params": { 00:21:11.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.272 "namespace": { 00:21:11.272 "nsid": 1, 00:21:11.272 "bdev_name": "malloc0", 00:21:11.272 "nguid": "08975ABAEA974E7FABBE5FDE7B8A1F4E", 00:21:11.272 "uuid": "08975aba-ea97-4e7f-abbe-5fde7b8a1f4e", 00:21:11.272 "no_auto_visible": false 00:21:11.272 } 00:21:11.272 } 00:21:11.272 }, 00:21:11.272 { 00:21:11.272 "method": "nvmf_subsystem_add_listener", 00:21:11.272 "params": { 00:21:11.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.272 "listen_address": { 00:21:11.272 "trtype": "TCP", 00:21:11.272 "adrfam": "IPv4", 00:21:11.272 "traddr": "10.0.0.2", 00:21:11.272 "trsvcid": 
"4420" 00:21:11.272 }, 00:21:11.272 "secure_channel": false, 00:21:11.272 "sock_impl": "ssl" 00:21:11.272 } 00:21:11.272 } 00:21:11.272 ] 00:21:11.272 } 00:21:11.272 ] 00:21:11.272 }' 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3928623 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3928623 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3928623 ']' 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.272 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.272 [2024-07-25 10:36:14.860845] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:21:11.272 [2024-07-25 10:36:14.860894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.272 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.272 [2024-07-25 10:36:14.932943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.531 [2024-07-25 10:36:15.006965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.531 [2024-07-25 10:36:15.007001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.531 [2024-07-25 10:36:15.007010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.531 [2024-07-25 10:36:15.007019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.531 [2024-07-25 10:36:15.007027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:11.531 [2024-07-25 10:36:15.007078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.531 [2024-07-25 10:36:15.216381] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.789 [2024-07-25 10:36:15.260360] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.789 [2024-07-25 10:36:15.260553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3928864 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3928864 /var/tmp/bdevperf.sock 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3928864 ']' 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
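Both processes in this part of the test are configured by replaying previously saved JSON rather than by live RPCs: the tgtcfg and bperfcfg blobs dumped above come from save_config and are fed back through -c /dev/fd/62 and -c /dev/fd/63. A hedged sketch of the same capture-and-replay, using bash process substitution in place of the explicit file-descriptor redirection used by the script:

tgtcfg=$(./scripts/rpc.py save_config)                             # target-side JSON, as dumped above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &

bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config) # captured from the previous bdevperf instance
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
# once /var/tmp/bdevperf.sock is up, the controller created from the config is already attached:
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'    # -> nvme0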
00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.048 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:12.048 "subsystems": [ 00:21:12.048 { 00:21:12.048 "subsystem": "keyring", 00:21:12.048 "config": [ 00:21:12.048 { 00:21:12.048 "method": "keyring_file_add_key", 00:21:12.048 "params": { 00:21:12.048 "name": "key0", 00:21:12.048 "path": "/tmp/tmp.p1UQ7icB8o" 00:21:12.048 } 00:21:12.048 } 00:21:12.048 ] 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "subsystem": "iobuf", 00:21:12.048 "config": [ 00:21:12.048 { 00:21:12.048 "method": "iobuf_set_options", 00:21:12.048 "params": { 00:21:12.048 "small_pool_count": 8192, 00:21:12.048 "large_pool_count": 1024, 00:21:12.048 "small_bufsize": 8192, 00:21:12.048 "large_bufsize": 135168 00:21:12.048 } 00:21:12.048 } 00:21:12.048 ] 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "subsystem": "sock", 00:21:12.048 "config": [ 00:21:12.048 { 00:21:12.048 "method": "sock_set_default_impl", 00:21:12.048 "params": { 00:21:12.048 "impl_name": "posix" 00:21:12.048 } 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "method": "sock_impl_set_options", 00:21:12.048 "params": { 00:21:12.048 "impl_name": "ssl", 00:21:12.048 "recv_buf_size": 4096, 00:21:12.048 "send_buf_size": 4096, 00:21:12.048 "enable_recv_pipe": true, 00:21:12.048 "enable_quickack": false, 00:21:12.048 "enable_placement_id": 0, 00:21:12.048 "enable_zerocopy_send_server": true, 00:21:12.048 "enable_zerocopy_send_client": false, 00:21:12.048 "zerocopy_threshold": 0, 00:21:12.048 "tls_version": 0, 00:21:12.048 "enable_ktls": false 00:21:12.048 } 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "method": "sock_impl_set_options", 00:21:12.048 "params": { 00:21:12.048 "impl_name": "posix", 00:21:12.048 "recv_buf_size": 2097152, 00:21:12.048 "send_buf_size": 2097152, 00:21:12.048 "enable_recv_pipe": true, 00:21:12.048 "enable_quickack": false, 00:21:12.048 "enable_placement_id": 0, 00:21:12.048 "enable_zerocopy_send_server": true, 00:21:12.048 "enable_zerocopy_send_client": false, 00:21:12.048 "zerocopy_threshold": 0, 00:21:12.048 "tls_version": 0, 00:21:12.048 "enable_ktls": false 00:21:12.048 } 00:21:12.048 } 00:21:12.048 ] 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "subsystem": "vmd", 00:21:12.048 "config": [] 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "subsystem": "accel", 00:21:12.048 "config": [ 00:21:12.048 { 00:21:12.048 "method": "accel_set_options", 00:21:12.048 "params": { 00:21:12.048 "small_cache_size": 128, 00:21:12.048 "large_cache_size": 16, 00:21:12.048 "task_count": 2048, 00:21:12.048 "sequence_count": 2048, 00:21:12.048 "buf_count": 2048 00:21:12.048 } 00:21:12.048 } 00:21:12.048 ] 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "subsystem": "bdev", 00:21:12.048 "config": [ 00:21:12.048 { 00:21:12.048 "method": "bdev_set_options", 00:21:12.048 "params": { 00:21:12.048 "bdev_io_pool_size": 65535, 00:21:12.048 "bdev_io_cache_size": 256, 00:21:12.048 "bdev_auto_examine": true, 00:21:12.048 "iobuf_small_cache_size": 128, 00:21:12.048 "iobuf_large_cache_size": 16 00:21:12.048 } 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "method": "bdev_raid_set_options", 00:21:12.048 "params": { 00:21:12.048 "process_window_size_kb": 1024, 00:21:12.048 "process_max_bandwidth_mb_sec": 0 00:21:12.048 } 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "method": "bdev_iscsi_set_options", 00:21:12.048 "params": { 00:21:12.048 "timeout_sec": 30 00:21:12.048 } 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "method": 
"bdev_nvme_set_options", 00:21:12.048 "params": { 00:21:12.048 "action_on_timeout": "none", 00:21:12.048 "timeout_us": 0, 00:21:12.048 "timeout_admin_us": 0, 00:21:12.048 "keep_alive_timeout_ms": 10000, 00:21:12.048 "arbitration_burst": 0, 00:21:12.048 "low_priority_weight": 0, 00:21:12.048 "medium_priority_weight": 0, 00:21:12.048 "high_priority_weight": 0, 00:21:12.048 "nvme_adminq_poll_period_us": 10000, 00:21:12.048 "nvme_ioq_poll_period_us": 0, 00:21:12.048 "io_queue_requests": 512, 00:21:12.048 "delay_cmd_submit": true, 00:21:12.048 "transport_retry_count": 4, 00:21:12.048 "bdev_retry_count": 3, 00:21:12.048 "transport_ack_timeout": 0, 00:21:12.048 "ctrlr_loss_timeout_sec": 0, 00:21:12.048 "reconnect_delay_sec": 0, 00:21:12.048 "fast_io_fail_timeout_sec": 0, 00:21:12.048 "disable_auto_failback": false, 00:21:12.048 "generate_uuids": false, 00:21:12.048 "transport_tos": 0, 00:21:12.048 "nvme_error_stat": false, 00:21:12.048 "rdma_srq_size": 0, 00:21:12.048 "io_path_stat": false, 00:21:12.048 "allow_accel_sequence": false, 00:21:12.048 "rdma_max_cq_size": 0, 00:21:12.048 "rdma_cm_event_timeout_ms": 0, 00:21:12.048 "dhchap_digests": [ 00:21:12.048 "sha256", 00:21:12.048 "sha384", 00:21:12.048 "sha512" 00:21:12.048 ], 00:21:12.048 "dhchap_dhgroups": [ 00:21:12.048 "null", 00:21:12.048 "ffdhe2048", 00:21:12.048 "ffdhe3072", 00:21:12.048 "ffdhe4096", 00:21:12.048 "ffdhe6144", 00:21:12.048 "ffdhe8192" 00:21:12.048 ] 00:21:12.048 } 00:21:12.048 }, 00:21:12.048 { 00:21:12.048 "method": "bdev_nvme_attach_controller", 00:21:12.048 "params": { 00:21:12.048 "name": "nvme0", 00:21:12.048 "trtype": "TCP", 00:21:12.048 "adrfam": "IPv4", 00:21:12.048 "traddr": "10.0.0.2", 00:21:12.048 "trsvcid": "4420", 00:21:12.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.048 "prchk_reftag": false, 00:21:12.049 "prchk_guard": false, 00:21:12.049 "ctrlr_loss_timeout_sec": 0, 00:21:12.049 "reconnect_delay_sec": 0, 00:21:12.049 "fast_io_fail_timeout_sec": 0, 00:21:12.049 "psk": "key0", 00:21:12.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.049 "hdgst": false, 00:21:12.049 "ddgst": false 00:21:12.049 } 00:21:12.049 }, 00:21:12.049 { 00:21:12.049 "method": "bdev_nvme_set_hotplug", 00:21:12.049 "params": { 00:21:12.049 "period_us": 100000, 00:21:12.049 "enable": false 00:21:12.049 } 00:21:12.049 }, 00:21:12.049 { 00:21:12.049 "method": "bdev_enable_histogram", 00:21:12.049 "params": { 00:21:12.049 "name": "nvme0n1", 00:21:12.049 "enable": true 00:21:12.049 } 00:21:12.049 }, 00:21:12.049 { 00:21:12.049 "method": "bdev_wait_for_examine" 00:21:12.049 } 00:21:12.049 ] 00:21:12.049 }, 00:21:12.049 { 00:21:12.049 "subsystem": "nbd", 00:21:12.049 "config": [] 00:21:12.049 } 00:21:12.049 ] 00:21:12.049 }' 00:21:12.049 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.307 [2024-07-25 10:36:15.762919] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:21:12.307 [2024-07-25 10:36:15.762973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928864 ] 00:21:12.307 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.307 [2024-07-25 10:36:15.833587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.307 [2024-07-25 10:36:15.908955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.564 [2024-07-25 10:36:16.059819] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.129 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.129 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:13.129 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:13.129 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:13.129 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.129 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.129 Running I/O for 1 seconds... 00:21:14.500 00:21:14.500 Latency(us) 00:21:14.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.500 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.500 Verification LBA range: start 0x0 length 0x2000 00:21:14.500 nvme0n1 : 1.03 4319.89 16.87 0.00 0.00 29147.98 7025.46 67947.72 00:21:14.500 =================================================================================================================== 00:21:14.500 Total : 4319.89 16.87 0.00 0.00 29147.98 7025.46 67947.72 00:21:14.500 0 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:14.500 nvmf_trace.0 00:21:14.500 10:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3928864 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3928864 ']' 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3928864 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.500 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3928864 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3928864' 00:21:14.500 killing process with pid 3928864 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3928864 00:21:14.500 Received shutdown signal, test time was about 1.000000 seconds 00:21:14.500 00:21:14.500 Latency(us) 00:21:14.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.500 =================================================================================================================== 00:21:14.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3928864 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.500 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.500 rmmod nvme_tcp 00:21:14.758 rmmod nvme_fabrics 00:21:14.758 rmmod nvme_keyring 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3928623 ']' 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3928623 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3928623 ']' 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3928623 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.758 10:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3928623 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3928623' 00:21:14.758 killing process with pid 3928623 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3928623 00:21:14.758 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3928623 00:21:15.015 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:15.016 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:15.016 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:15.016 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.016 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:15.016 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.016 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.016 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.El70dmCQVW /tmp/tmp.5C9quFN3EG /tmp/tmp.p1UQ7icB8o 00:21:16.915 00:21:16.915 real 1m26.294s 00:21:16.915 user 2m5.893s 00:21:16.915 sys 0m35.630s 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.915 ************************************ 00:21:16.915 END TEST nvmf_tls 00:21:16.915 ************************************ 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:16.915 10:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.174 ************************************ 00:21:17.174 START TEST nvmf_fips 00:21:17.174 ************************************ 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:17.174 * Looking for test storage... 
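By this point the TLS test's cleanup() has already run; the teardown traced above amounts to archiving the shared-memory trace file, unloading the host-side nvme modules, flushing the test interface, and deleting the temporary key files. A compact recap of those steps (the output directory name is illustrative; this job runs them as root):

tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
modprobe -r nvme-tcp nvme-fabrics
ip -4 addr flush cvl_0_1
rm -f /tmp/tmp.El70dmCQVW /tmp/tmp.5C9quFN3EG /tmp/tmp.p1UQ7icB8o   # key/PSK tempfiles created earlier in tls.sh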
00:21:17.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:17.174 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:17.175 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:17.433 Error setting digest 00:21:17.433 00C28CBA027F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:17.433 00C28CBA027F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.433 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:24.026 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:24.026 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 
00:21:24.027 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:24.027 Found net devices under 0000:af:00.0: cvl_0_0 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:24.027 Found net devices under 0000:af:00.1: cvl_0_1 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.027 
10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:21:24.027 00:21:24.027 --- 10.0.0.2 ping statistics --- 00:21:24.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.027 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:24.027 00:21:24.027 --- 10.0.0.1 ping statistics --- 00:21:24.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.027 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3932893 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3932893 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3932893 ']' 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.027 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.027 [2024-07-25 10:36:27.723380] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
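The commands above build the test topology used for the rest of this run: one ice port (cvl_0_0) is moved into a dedicated network namespace that hosts the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is verified with a ping in each direction before nvmf_tgt is started inside the namespace. Condensed into one place (same interface names and addresses as the trace; the real logic lives in nvmf/common.sh and also handles address flushing, retries and cleanup), the setup is roughly:

    # Namespace topology sketch, mirroring the nvmf_tcp_init commands above.
    TARGET_NS=cvl_0_0_ns_spdk

    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"               # target-side port moves into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in

    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1        # target -> initiator

    # the target app is then prefixed with the namespace, as in the nvmfappstart invocation above:
    #   ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2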
00:21:24.027 [2024-07-25 10:36:27.723433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.286 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.286 [2024-07-25 10:36:27.798060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.286 [2024-07-25 10:36:27.869009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.286 [2024-07-25 10:36:27.869050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.286 [2024-07-25 10:36:27.869059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.286 [2024-07-25 10:36:27.869067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.286 [2024-07-25 10:36:27.869074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.286 [2024-07-25 10:36:27.869095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:24.852 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:25.110 [2024-07-25 10:36:28.703323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.110 [2024-07-25 10:36:28.719338] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.110 [2024-07-25 10:36:28.719513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.110 
[2024-07-25 10:36:28.747529] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:25.110 malloc0 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3933177 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3933177 /var/tmp/bdevperf.sock 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3933177 ']' 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.110 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:25.369 [2024-07-25 10:36:28.836390] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:21:25.369 [2024-07-25 10:36:28.836441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933177 ] 00:21:25.369 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.369 [2024-07-25 10:36:28.902091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.369 [2024-07-25 10:36:28.973663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.936 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:25.936 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:25.936 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:26.195 [2024-07-25 10:36:29.775532] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.195 [2024-07-25 10:36:29.775618] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:26.195 TLSTESTn1 00:21:26.195 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:26.454 Running I/O for 10 seconds... 
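At this point the target inside the namespace is listening on 10.0.0.2:4420 with TLS enabled, the PSK has been written to key.txt with mode 0600, and bdevperf attaches TLSTESTn1 to the subsystem over NVMe/TCP using that key before perform_tests drives the 128-deep, 4 KiB verify workload for 10 seconds. The trace does not echo the body of setup_nvmf_tgt_conf, so the target-side RPC sequence below is a hedged reconstruction from the NQNs, key path and transport options visible above; the exact flags and the malloc bdev size are assumptions and may differ between SPDK versions. The initiator-side commands are taken from the trace.

    # Target side: hedged reconstruction of setup_nvmf_tgt_conf (flags are assumptions).
    rpc=./scripts/rpc.py
    key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt

    $rpc nvmf_create_transport -t tcp -o                          # transport options as set in nvmf/common.sh
    $rpc bdev_malloc_create 32 4096 -b malloc0                    # size chosen for illustration
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

    # Initiator side, as echoed in the trace (bdevperf waits on its own RPC socket):
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests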
00:21:36.438 00:21:36.438 Latency(us) 00:21:36.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.438 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:36.438 Verification LBA range: start 0x0 length 0x2000 00:21:36.438 TLSTESTn1 : 10.02 4711.55 18.40 0.00 0.00 27118.11 6212.81 76336.33 00:21:36.438 =================================================================================================================== 00:21:36.438 Total : 4711.55 18.40 0.00 0.00 27118.11 6212.81 76336.33 00:21:36.438 0 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:36.438 nvmf_trace.0 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3933177 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3933177 ']' 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3933177 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.438 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3933177 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3933177' 00:21:36.696 killing process with pid 3933177 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3933177 00:21:36.696 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.696 00:21:36.696 Latency(us) 00:21:36.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.696 =================================================================================================================== 00:21:36.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:36.696 
[2024-07-25 10:36:40.164274] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3933177 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.696 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.696 rmmod nvme_tcp 00:21:36.696 rmmod nvme_fabrics 00:21:36.696 rmmod nvme_keyring 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3932893 ']' 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3932893 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3932893 ']' 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3932893 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3932893 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3932893' 00:21:36.955 killing process with pid 3932893 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3932893 00:21:36.955 [2024-07-25 10:36:40.460795] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3932893 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:36.955 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.956 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.956 10:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.956 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.956 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.489 00:21:39.489 real 0m22.069s 00:21:39.489 user 0m21.711s 00:21:39.489 sys 0m11.234s 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.489 ************************************ 00:21:39.489 END TEST nvmf_fips 00:21:39.489 ************************************ 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.489 10:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.059 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.060 
10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:46.060 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:46.060 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
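As in the FIPS test earlier in the log, gather_supported_nvmf_pci_devs matches the host's NICs against a table of Intel (e810, x722) and Mellanox device IDs and then resolves each matching PCI address to its kernel netdev through sysfs, which is how both ice ports end up reported as cvl_0_0 and cvl_0_1. A minimal sketch of that sysfs lookup for the e810 device ID seen here (0x159b) follows; the real helper additionally builds a pci_bus_cache, handles unbound or userspace-driver ports, and checks the link state.

    # Sketch: map each supported NIC's PCI address to its netdev via sysfs.
    shopt -s nullglob
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # e.g. .../net/cvl_0_0
        (( ${#pci_net_devs[@]} )) || continue                     # no netdev: bound to vfio/uio
        for dev_path in "${pci_net_devs[@]}"; do
            echo "Found net devices under $pci: ${dev_path##*/}"
        done
    done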
00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:46.060 Found net devices under 0000:af:00.0: cvl_0_0 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:46.060 Found net devices under 0000:af:00.1: cvl_0_1 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.060 ************************************ 00:21:46.060 START TEST nvmf_perf_adq 00:21:46.060 ************************************ 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:46.060 * Looking for test storage... 
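Each test script in this log is launched through the same run_test wrapper visible in the xtrace above: it prints the starred START TEST / END TEST banners and runs the script under the shell's time builtin, which is where the real/user/sys lines after each test come from. A simplified sketch of that wrapper is shown here; the actual autotest_common.sh helper also toggles xtrace and checks its arguments, as the '[' 3 -le 1 ']' lines in the trace show.

    # Simplified run_test sketch: banner, run under time, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # e.g.: run_test nvmf_perf_adq test/nvmf/target/perf_adq.sh --transport=tcp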
00:21:46.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.060 10:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:46.060 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.061 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.631 10:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:52.631 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:52.631 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:52.631 Found net devices under 0000:af:00.0: cvl_0_0 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:52.631 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:52.632 Found net devices under 0000:af:00.1: cvl_0_1 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:52.632 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:53.268 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:55.177 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.451 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:00.452 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:00.452 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:00.452 Found net devices under 0000:af:00.0: cvl_0_0 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.452 10:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:00.452 Found net devices under 0000:af:00.1: cvl_0_1 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:00.452 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.453 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.453 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.453 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.453 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.453 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
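nvmf_tcp_init in the trace above splits the two E810 ports across network namespaces so target and initiator traffic really crosses the NICs rather than loopback: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1). A condensed recap of that setup, using the interface names from this run and assuming root privileges; the pings that follow in the log verify connectivity in both directions.

# Recap of the namespace split performed by nvmf_tcp_init (names taken from this run).
ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
ping -c 1 10.0.0.2                                             # connectivity check, as in the trace

Every target-side command from here on is therefore wrapped in ip netns exec cvl_0_0_ns_spdk, including the nvmf_tgt launch itself.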
00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:22:00.712 00:22:00.712 --- 10.0.0.2 ping statistics --- 00:22:00.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.712 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:22:00.712 00:22:00.712 --- 10.0.0.1 ping statistics --- 00:22:00.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.712 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3943329 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3943329 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3943329 ']' 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:00.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.712 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:00.712 [2024-07-25 10:37:04.314119] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:22:00.712 [2024-07-25 10:37:04.314172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.712 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.712 [2024-07-25 10:37:04.388426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.972 [2024-07-25 10:37:04.463762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.972 [2024-07-25 10:37:04.463799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.972 [2024-07-25 10:37:04.463808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.972 [2024-07-25 10:37:04.463817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.972 [2024-07-25 10:37:04.463824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.972 [2024-07-25 10:37:04.463870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.972 [2024-07-25 10:37:04.463966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.972 [2024-07-25 10:37:04.464050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.972 [2024-07-25 10:37:04.464052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
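The target-side ADQ configuration that the trace below walks through (perf_adq.sh@42-49) is driven entirely over JSON-RPC while nvmf_tgt waits in --wait-for-rpc: enable placement IDs and zero-copy send on the posix sock implementation, finish framework init, create the TCP transport with an explicit socket priority, and expose a malloc bdev through a listener on 10.0.0.2:4420. rpc_cmd in the trace forwards these calls to SPDK's RPC client; the lines below are a hedged sketch of the equivalent rpc.py invocations, with the namespace wrapper and the default /var/tmp/spdk.sock socket assumed from this run.

# Sketch of the RPC sequence visible in the trace; $NS/$RPC rely on deliberate word splitting.
NS="ip netns exec cvl_0_0_ns_spdk"
RPC="$NS ./scripts/rpc.py"

$RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420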
00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.541 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.801 [2024-07-25 10:37:05.321227] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.801 Malloc1 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.801 [2024-07-25 10:37:05.375849] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.801 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3943618 00:22:01.802 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:01.802 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:01.802 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.710 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:03.710 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.710 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.970 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.970 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:03.970 "tick_rate": 2500000000, 00:22:03.970 "poll_groups": [ 00:22:03.970 { 00:22:03.970 "name": "nvmf_tgt_poll_group_000", 00:22:03.970 "admin_qpairs": 1, 00:22:03.970 "io_qpairs": 1, 00:22:03.970 "current_admin_qpairs": 1, 00:22:03.970 "current_io_qpairs": 1, 00:22:03.970 "pending_bdev_io": 0, 00:22:03.970 "completed_nvme_io": 21077, 00:22:03.970 "transports": [ 00:22:03.970 { 00:22:03.970 "trtype": "TCP" 00:22:03.970 } 00:22:03.970 ] 00:22:03.970 }, 00:22:03.970 { 00:22:03.970 "name": "nvmf_tgt_poll_group_001", 00:22:03.970 "admin_qpairs": 0, 00:22:03.970 "io_qpairs": 1, 00:22:03.970 "current_admin_qpairs": 0, 00:22:03.970 "current_io_qpairs": 1, 00:22:03.970 "pending_bdev_io": 0, 00:22:03.970 "completed_nvme_io": 20784, 00:22:03.970 "transports": [ 00:22:03.970 { 00:22:03.970 "trtype": "TCP" 00:22:03.970 } 00:22:03.970 ] 00:22:03.970 }, 00:22:03.970 { 00:22:03.970 "name": "nvmf_tgt_poll_group_002", 00:22:03.970 "admin_qpairs": 0, 00:22:03.970 "io_qpairs": 1, 00:22:03.970 "current_admin_qpairs": 0, 00:22:03.970 "current_io_qpairs": 1, 00:22:03.970 "pending_bdev_io": 0, 00:22:03.970 "completed_nvme_io": 21356, 00:22:03.970 "transports": [ 00:22:03.970 { 00:22:03.970 "trtype": "TCP" 00:22:03.970 } 00:22:03.970 ] 00:22:03.970 }, 00:22:03.970 { 00:22:03.970 "name": "nvmf_tgt_poll_group_003", 00:22:03.970 "admin_qpairs": 0, 00:22:03.970 "io_qpairs": 1, 00:22:03.970 "current_admin_qpairs": 0, 00:22:03.970 "current_io_qpairs": 1, 00:22:03.970 "pending_bdev_io": 0, 00:22:03.970 "completed_nvme_io": 21250, 00:22:03.970 "transports": [ 00:22:03.970 { 00:22:03.970 "trtype": "TCP" 00:22:03.970 } 00:22:03.970 ] 00:22:03.970 } 00:22:03.970 ] 00:22:03.970 }' 00:22:03.970 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:03.970 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:03.970 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:03.970 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:03.970 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 3943618 00:22:12.104 Initializing NVMe Controllers 00:22:12.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:12.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:12.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:12.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:12.104 Initialization complete. Launching workers. 00:22:12.104 ======================================================== 00:22:12.104 Latency(us) 00:22:12.104 Device Information : IOPS MiB/s Average min max 00:22:12.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11190.61 43.71 5719.46 1961.11 9014.55 00:22:12.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11060.11 43.20 5786.36 2427.63 10757.44 00:22:12.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11263.60 44.00 5681.72 1907.42 10510.41 00:22:12.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11212.21 43.80 5708.16 2015.23 10683.94 00:22:12.104 ======================================================== 00:22:12.104 Total : 44726.52 174.71 5723.67 1907.42 10757.44 00:22:12.104 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.104 rmmod nvme_tcp 00:22:12.104 rmmod nvme_fabrics 00:22:12.104 rmmod nvme_keyring 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3943329 ']' 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3943329 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3943329 ']' 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3943329 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3943329 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:12.104 10:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3943329' 00:22:12.104 killing process with pid 3943329 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3943329 00:22:12.104 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3943329 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.364 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.274 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.274 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:14.274 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:15.656 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:18.234 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:23.514 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:23.514 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:23.515 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:23.515 Found net devices under 0000:af:00.0: cvl_0_0 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:23.515 Found net devices under 0000:af:00.1: cvl_0_1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:22:23.515 00:22:23.515 --- 10.0.0.2 ping statistics --- 00:22:23.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.515 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:23.515 00:22:23.515 --- 10.0.0.1 ping statistics --- 00:22:23.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.515 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:23.515 net.core.busy_poll = 1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:23.515 net.core.busy_read = 1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:23.515 
10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:23.515 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3947464 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3947464 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3947464 ']' 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.515 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.516 [2024-07-25 10:37:27.107600] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:22:23.516 [2024-07-25 10:37:27.107656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.516 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.516 [2024-07-25 10:37:27.182184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.774 [2024-07-25 10:37:27.256467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.774 [2024-07-25 10:37:27.256507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.774 [2024-07-25 10:37:27.256516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.774 [2024-07-25 10:37:27.256525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.774 [2024-07-25 10:37:27.256532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
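Annotation: the adq_configure_driver block above reduces to a handful of standard Linux commands. A condensed sketch of what the log shows follows (in the actual run every command is executed inside the cvl_0_0_ns_spdk namespace via ip netns exec; the interface name, IP and port are simply the values seen in this run):

    iface=cvl_0_0
    # enable hardware traffic-class offload and turn off packet-inspect optimization
    ethtool --offload "$iface" hw-tc-offload on
    ethtool --set-priv-flags "$iface" channel-pkt-inspect-optimize off
    # enable busy polling so application threads poll their queues directly
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 on queues 0-1 (default), TC1 on queues 2-3 (ADQ)
    tc qdisc add dev "$iface" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$iface" ingress
    # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware
    tc filter add dev "$iface" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked afterwards configures transmit packet steering for the same interface; its internals are not shown in this excerpt.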
00:22:23.774 [2024-07-25 10:37:27.256579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.774 [2024-07-25 10:37:27.256673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.774 [2024-07-25 10:37:27.256755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.774 [2024-07-25 10:37:27.256757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.343 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.343 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.343 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:24.343 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.343 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.603 [2024-07-25 10:37:28.104172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.603 Malloc1 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.603 [2024-07-25 10:37:28.154581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3947713 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:24.603 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:24.603 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.509 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:26.509 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.509 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.509 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.509 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:26.509 "tick_rate": 2500000000, 00:22:26.509 "poll_groups": [ 00:22:26.509 { 00:22:26.509 "name": "nvmf_tgt_poll_group_000", 00:22:26.509 "admin_qpairs": 1, 00:22:26.509 "io_qpairs": 2, 00:22:26.509 "current_admin_qpairs": 1, 00:22:26.509 
"current_io_qpairs": 2, 00:22:26.509 "pending_bdev_io": 0, 00:22:26.510 "completed_nvme_io": 28884, 00:22:26.510 "transports": [ 00:22:26.510 { 00:22:26.510 "trtype": "TCP" 00:22:26.510 } 00:22:26.510 ] 00:22:26.510 }, 00:22:26.510 { 00:22:26.510 "name": "nvmf_tgt_poll_group_001", 00:22:26.510 "admin_qpairs": 0, 00:22:26.510 "io_qpairs": 2, 00:22:26.510 "current_admin_qpairs": 0, 00:22:26.510 "current_io_qpairs": 2, 00:22:26.510 "pending_bdev_io": 0, 00:22:26.510 "completed_nvme_io": 29471, 00:22:26.510 "transports": [ 00:22:26.510 { 00:22:26.510 "trtype": "TCP" 00:22:26.510 } 00:22:26.510 ] 00:22:26.510 }, 00:22:26.510 { 00:22:26.510 "name": "nvmf_tgt_poll_group_002", 00:22:26.510 "admin_qpairs": 0, 00:22:26.510 "io_qpairs": 0, 00:22:26.510 "current_admin_qpairs": 0, 00:22:26.510 "current_io_qpairs": 0, 00:22:26.510 "pending_bdev_io": 0, 00:22:26.510 "completed_nvme_io": 0, 00:22:26.510 "transports": [ 00:22:26.510 { 00:22:26.510 "trtype": "TCP" 00:22:26.510 } 00:22:26.510 ] 00:22:26.510 }, 00:22:26.510 { 00:22:26.510 "name": "nvmf_tgt_poll_group_003", 00:22:26.510 "admin_qpairs": 0, 00:22:26.510 "io_qpairs": 0, 00:22:26.510 "current_admin_qpairs": 0, 00:22:26.510 "current_io_qpairs": 0, 00:22:26.510 "pending_bdev_io": 0, 00:22:26.510 "completed_nvme_io": 0, 00:22:26.510 "transports": [ 00:22:26.510 { 00:22:26.510 "trtype": "TCP" 00:22:26.510 } 00:22:26.510 ] 00:22:26.510 } 00:22:26.510 ] 00:22:26.510 }' 00:22:26.510 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:26.510 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:26.769 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:26.769 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:26.769 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3947713 00:22:34.891 Initializing NVMe Controllers 00:22:34.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:34.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:34.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:34.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:34.891 Initialization complete. Launching workers. 
00:22:34.891 ======================================================== 00:22:34.891 Latency(us) 00:22:34.891 Device Information : IOPS MiB/s Average min max 00:22:34.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8091.40 31.61 7910.00 1606.52 52728.34 00:22:34.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8276.90 32.33 7732.05 1496.99 52614.68 00:22:34.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7172.30 28.02 8925.51 1458.86 53941.22 00:22:34.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7340.10 28.67 8726.06 1519.66 53164.64 00:22:34.891 ======================================================== 00:22:34.891 Total : 30880.70 120.63 8292.13 1458.86 53941.22 00:22:34.891 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.891 rmmod nvme_tcp 00:22:34.891 rmmod nvme_fabrics 00:22:34.891 rmmod nvme_keyring 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3947464 ']' 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3947464 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3947464 ']' 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3947464 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3947464 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3947464' 00:22:34.891 killing process with pid 3947464 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3947464 00:22:34.891 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3947464 00:22:35.150 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.150 
10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.150 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.150 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.150 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.150 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.150 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.150 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:38.441 00:22:38.441 real 0m52.620s 00:22:38.441 user 2m46.456s 00:22:38.441 sys 0m13.920s 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.441 ************************************ 00:22:38.441 END TEST nvmf_perf_adq 00:22:38.441 ************************************ 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.441 ************************************ 00:22:38.441 START TEST nvmf_shutdown 00:22:38.441 ************************************ 00:22:38.441 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:38.441 * Looking for test storage... 
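Annotation: the nvmftestfini sequence that closes the perf_adq test above undoes the earlier setup: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, flush the test address and remove the namespace. A rough equivalent of the visible steps (the _remove_spdk_ns helper runs with its trace redirected, so the ip netns del line is an assumption about what it does):

    modprobe -r nvme-tcp nvme-fabrics     # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the target (pid 3947464 in this run)
    ip -4 addr flush cvl_0_1              # drop the initiator-side test address
    ip netns del cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns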
00:22:38.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.442 10:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:38.442 10:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:38.442 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:38.442 ************************************ 00:22:38.442 START TEST nvmf_shutdown_tc1 00:22:38.442 ************************************ 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.442 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.015 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:45.015 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:45.016 10:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:45.016 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:45.016 Found net devices under 0000:af:00.0: cvl_0_0 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:45.016 Found net devices under 0000:af:00.1: cvl_0_1 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.016 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.016 10:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:22:45.016 00:22:45.016 --- 10.0.0.2 ping statistics --- 00:22:45.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.016 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:22:45.016 00:22:45.016 --- 10.0.0.1 ping statistics --- 00:22:45.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.016 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3953251 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3953251 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3953251 ']' 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.016 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.016 [2024-07-25 10:37:48.359215] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:22:45.016 [2024-07-25 10:37:48.359266] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.016 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.017 [2024-07-25 10:37:48.434400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.017 [2024-07-25 10:37:48.503533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.017 [2024-07-25 10:37:48.503577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.017 [2024-07-25 10:37:48.503586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.017 [2024-07-25 10:37:48.503598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.017 [2024-07-25 10:37:48.503604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
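Annotation: for the shutdown test the target is started the same way as before: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (cores 1-4, matching the reactor notices below) and the script blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that start-up handshake, assuming the default /var/tmp/spdk.sock socket and the spdk_get_version RPC (the real waitforlisten helper is more involved):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done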
00:22:45.017 [2024-07-25 10:37:48.503712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.017 [2024-07-25 10:37:48.503808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.017 [2024-07-25 10:37:48.503917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.017 [2024-07-25 10:37:48.503919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.585 [2024-07-25 10:37:49.214080] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.585 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.586 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:45.586 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:45.586 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:45.586 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.586 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.845 Malloc1 00:22:45.845 [2024-07-25 10:37:49.324982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.845 Malloc2 00:22:45.845 Malloc3 00:22:45.845 Malloc4 00:22:45.845 Malloc5 00:22:45.845 Malloc6 00:22:46.115 Malloc7 00:22:46.115 Malloc8 00:22:46.115 Malloc9 00:22:46.115 Malloc10 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3953518 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3953518 /var/tmp/bdevperf.sock 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3953518 ']' 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.115 10:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.115 { 00:22:46.115 "params": { 00:22:46.115 "name": "Nvme$subsystem", 00:22:46.115 "trtype": "$TEST_TRANSPORT", 00:22:46.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.115 "adrfam": "ipv4", 00:22:46.115 "trsvcid": "$NVMF_PORT", 00:22:46.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.115 "hdgst": ${hdgst:-false}, 00:22:46.115 "ddgst": ${ddgst:-false} 00:22:46.115 }, 00:22:46.115 "method": "bdev_nvme_attach_controller" 00:22:46.115 } 00:22:46.115 EOF 00:22:46.115 )") 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.115 { 00:22:46.115 "params": { 00:22:46.115 "name": "Nvme$subsystem", 00:22:46.115 "trtype": "$TEST_TRANSPORT", 00:22:46.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.115 "adrfam": "ipv4", 00:22:46.115 "trsvcid": "$NVMF_PORT", 00:22:46.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.115 "hdgst": ${hdgst:-false}, 00:22:46.115 "ddgst": ${ddgst:-false} 00:22:46.115 }, 00:22:46.115 "method": "bdev_nvme_attach_controller" 00:22:46.115 } 00:22:46.115 EOF 00:22:46.115 )") 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.115 { 00:22:46.115 "params": { 00:22:46.115 "name": 
"Nvme$subsystem", 00:22:46.115 "trtype": "$TEST_TRANSPORT", 00:22:46.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.115 "adrfam": "ipv4", 00:22:46.115 "trsvcid": "$NVMF_PORT", 00:22:46.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.115 "hdgst": ${hdgst:-false}, 00:22:46.115 "ddgst": ${ddgst:-false} 00:22:46.115 }, 00:22:46.115 "method": "bdev_nvme_attach_controller" 00:22:46.115 } 00:22:46.115 EOF 00:22:46.115 )") 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.115 { 00:22:46.115 "params": { 00:22:46.115 "name": "Nvme$subsystem", 00:22:46.115 "trtype": "$TEST_TRANSPORT", 00:22:46.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.115 "adrfam": "ipv4", 00:22:46.115 "trsvcid": "$NVMF_PORT", 00:22:46.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.115 "hdgst": ${hdgst:-false}, 00:22:46.115 "ddgst": ${ddgst:-false} 00:22:46.115 }, 00:22:46.115 "method": "bdev_nvme_attach_controller" 00:22:46.115 } 00:22:46.115 EOF 00:22:46.115 )") 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.115 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.115 { 00:22:46.115 "params": { 00:22:46.115 "name": "Nvme$subsystem", 00:22:46.115 "trtype": "$TEST_TRANSPORT", 00:22:46.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.115 "adrfam": "ipv4", 00:22:46.115 "trsvcid": "$NVMF_PORT", 00:22:46.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.115 "hdgst": ${hdgst:-false}, 00:22:46.115 "ddgst": ${ddgst:-false} 00:22:46.115 }, 00:22:46.115 "method": "bdev_nvme_attach_controller" 00:22:46.115 } 00:22:46.115 EOF 00:22:46.115 )") 00:22:46.116 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.116 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.116 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.116 { 00:22:46.116 "params": { 00:22:46.116 "name": "Nvme$subsystem", 00:22:46.116 "trtype": "$TEST_TRANSPORT", 00:22:46.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.116 "adrfam": "ipv4", 00:22:46.116 "trsvcid": "$NVMF_PORT", 00:22:46.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.116 "hdgst": ${hdgst:-false}, 00:22:46.116 "ddgst": ${ddgst:-false} 00:22:46.116 }, 00:22:46.116 "method": "bdev_nvme_attach_controller" 00:22:46.116 } 00:22:46.116 EOF 00:22:46.116 )") 00:22:46.116 [2024-07-25 10:37:49.810815] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:22:46.116 [2024-07-25 10:37:49.810869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.423 { 00:22:46.423 "params": { 00:22:46.423 "name": "Nvme$subsystem", 00:22:46.423 "trtype": "$TEST_TRANSPORT", 00:22:46.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.423 "adrfam": "ipv4", 00:22:46.423 "trsvcid": "$NVMF_PORT", 00:22:46.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.423 "hdgst": ${hdgst:-false}, 00:22:46.423 "ddgst": ${ddgst:-false} 00:22:46.423 }, 00:22:46.423 "method": "bdev_nvme_attach_controller" 00:22:46.423 } 00:22:46.423 EOF 00:22:46.423 )") 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.423 { 00:22:46.423 "params": { 00:22:46.423 "name": "Nvme$subsystem", 00:22:46.423 "trtype": "$TEST_TRANSPORT", 00:22:46.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.423 "adrfam": "ipv4", 00:22:46.423 "trsvcid": "$NVMF_PORT", 00:22:46.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.423 "hdgst": ${hdgst:-false}, 00:22:46.423 "ddgst": ${ddgst:-false} 00:22:46.423 }, 00:22:46.423 "method": "bdev_nvme_attach_controller" 00:22:46.423 } 00:22:46.423 EOF 00:22:46.423 )") 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.423 { 00:22:46.423 "params": { 00:22:46.423 "name": "Nvme$subsystem", 00:22:46.423 "trtype": "$TEST_TRANSPORT", 00:22:46.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.423 "adrfam": "ipv4", 00:22:46.423 "trsvcid": "$NVMF_PORT", 00:22:46.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.423 "hdgst": ${hdgst:-false}, 00:22:46.423 "ddgst": ${ddgst:-false} 00:22:46.423 }, 00:22:46.423 "method": "bdev_nvme_attach_controller" 00:22:46.423 } 00:22:46.423 EOF 00:22:46.423 )") 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.423 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.423 { 00:22:46.423 "params": { 00:22:46.423 "name": "Nvme$subsystem", 00:22:46.423 
"trtype": "$TEST_TRANSPORT", 00:22:46.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.423 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "$NVMF_PORT", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.424 "hdgst": ${hdgst:-false}, 00:22:46.424 "ddgst": ${ddgst:-false} 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 } 00:22:46.424 EOF 00:22:46.424 )") 00:22:46.424 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.424 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:46.424 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:46.424 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:46.424 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme1", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme2", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme3", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme4", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme5", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme6", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": 
"bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme7", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme8", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme9", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 },{ 00:22:46.424 "params": { 00:22:46.424 "name": "Nvme10", 00:22:46.424 "trtype": "tcp", 00:22:46.424 "traddr": "10.0.0.2", 00:22:46.424 "adrfam": "ipv4", 00:22:46.424 "trsvcid": "4420", 00:22:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:46.424 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:46.424 "hdgst": false, 00:22:46.424 "ddgst": false 00:22:46.424 }, 00:22:46.424 "method": "bdev_nvme_attach_controller" 00:22:46.424 }' 00:22:46.424 [2024-07-25 10:37:49.883903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.424 [2024-07-25 10:37:49.953233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3953518 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:47.803 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:48.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3953518 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 
3953251 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.741 { 00:22:48.741 "params": { 00:22:48.741 "name": "Nvme$subsystem", 00:22:48.741 "trtype": "$TEST_TRANSPORT", 00:22:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.741 "adrfam": "ipv4", 00:22:48.741 "trsvcid": "$NVMF_PORT", 00:22:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.741 "hdgst": ${hdgst:-false}, 00:22:48.741 "ddgst": ${ddgst:-false} 00:22:48.741 }, 00:22:48.741 "method": "bdev_nvme_attach_controller" 00:22:48.741 } 00:22:48.741 EOF 00:22:48.741 )") 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.741 { 00:22:48.741 "params": { 00:22:48.741 "name": "Nvme$subsystem", 00:22:48.741 "trtype": "$TEST_TRANSPORT", 00:22:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.741 "adrfam": "ipv4", 00:22:48.741 "trsvcid": "$NVMF_PORT", 00:22:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.741 "hdgst": ${hdgst:-false}, 00:22:48.741 "ddgst": ${ddgst:-false} 00:22:48.741 }, 00:22:48.741 "method": "bdev_nvme_attach_controller" 00:22:48.741 } 00:22:48.741 EOF 00:22:48.741 )") 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.741 { 00:22:48.741 "params": { 00:22:48.741 "name": "Nvme$subsystem", 00:22:48.741 "trtype": "$TEST_TRANSPORT", 00:22:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.741 "adrfam": "ipv4", 00:22:48.741 "trsvcid": "$NVMF_PORT", 00:22:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.741 "hdgst": ${hdgst:-false}, 00:22:48.741 "ddgst": ${ddgst:-false} 00:22:48.741 }, 00:22:48.741 "method": "bdev_nvme_attach_controller" 00:22:48.741 } 00:22:48.741 EOF 00:22:48.741 )") 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.741 { 00:22:48.741 "params": { 00:22:48.741 "name": "Nvme$subsystem", 00:22:48.741 "trtype": "$TEST_TRANSPORT", 00:22:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.741 "adrfam": "ipv4", 00:22:48.741 "trsvcid": "$NVMF_PORT", 00:22:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.741 "hdgst": ${hdgst:-false}, 00:22:48.741 "ddgst": ${ddgst:-false} 00:22:48.741 }, 00:22:48.741 "method": "bdev_nvme_attach_controller" 00:22:48.741 } 00:22:48.741 EOF 00:22:48.741 )") 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.741 { 00:22:48.741 "params": { 00:22:48.741 "name": "Nvme$subsystem", 00:22:48.741 "trtype": "$TEST_TRANSPORT", 00:22:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.741 "adrfam": "ipv4", 00:22:48.741 "trsvcid": "$NVMF_PORT", 00:22:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.741 "hdgst": ${hdgst:-false}, 00:22:48.741 "ddgst": ${ddgst:-false} 00:22:48.741 }, 00:22:48.741 "method": "bdev_nvme_attach_controller" 00:22:48.741 } 00:22:48.741 EOF 00:22:48.741 )") 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.741 [2024-07-25 10:37:52.402498] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:22:48.741 [2024-07-25 10:37:52.402551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953947 ] 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.741 { 00:22:48.741 "params": { 00:22:48.741 "name": "Nvme$subsystem", 00:22:48.741 "trtype": "$TEST_TRANSPORT", 00:22:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.741 "adrfam": "ipv4", 00:22:48.741 "trsvcid": "$NVMF_PORT", 00:22:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.741 "hdgst": ${hdgst:-false}, 00:22:48.741 "ddgst": ${ddgst:-false} 00:22:48.741 }, 00:22:48.741 "method": "bdev_nvme_attach_controller" 00:22:48.741 } 00:22:48.741 EOF 00:22:48.741 )") 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.741 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.741 { 00:22:48.742 "params": { 00:22:48.742 "name": "Nvme$subsystem", 00:22:48.742 "trtype": "$TEST_TRANSPORT", 00:22:48.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.742 "adrfam": "ipv4", 00:22:48.742 "trsvcid": "$NVMF_PORT", 00:22:48.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.742 "hdgst": ${hdgst:-false}, 00:22:48.742 "ddgst": ${ddgst:-false} 00:22:48.742 }, 00:22:48.742 "method": "bdev_nvme_attach_controller" 00:22:48.742 } 00:22:48.742 EOF 00:22:48.742 )") 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.742 { 00:22:48.742 "params": { 00:22:48.742 "name": "Nvme$subsystem", 00:22:48.742 "trtype": "$TEST_TRANSPORT", 00:22:48.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.742 "adrfam": "ipv4", 00:22:48.742 "trsvcid": "$NVMF_PORT", 00:22:48.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.742 "hdgst": ${hdgst:-false}, 00:22:48.742 "ddgst": ${ddgst:-false} 00:22:48.742 }, 00:22:48.742 "method": "bdev_nvme_attach_controller" 00:22:48.742 } 00:22:48.742 EOF 00:22:48.742 )") 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.742 { 00:22:48.742 "params": { 00:22:48.742 "name": "Nvme$subsystem", 00:22:48.742 "trtype": "$TEST_TRANSPORT", 00:22:48.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.742 "adrfam": "ipv4", 00:22:48.742 "trsvcid": "$NVMF_PORT", 00:22:48.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.742 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.742 "hdgst": ${hdgst:-false}, 00:22:48.742 "ddgst": ${ddgst:-false} 00:22:48.742 }, 00:22:48.742 "method": "bdev_nvme_attach_controller" 00:22:48.742 } 00:22:48.742 EOF 00:22:48.742 )") 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.742 { 00:22:48.742 "params": { 00:22:48.742 "name": "Nvme$subsystem", 00:22:48.742 "trtype": "$TEST_TRANSPORT", 00:22:48.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.742 "adrfam": "ipv4", 00:22:48.742 "trsvcid": "$NVMF_PORT", 00:22:48.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.742 "hdgst": ${hdgst:-false}, 00:22:48.742 "ddgst": ${ddgst:-false} 00:22:48.742 }, 00:22:48.742 "method": "bdev_nvme_attach_controller" 00:22:48.742 } 00:22:48.742 EOF 00:22:48.742 )") 00:22:48.742 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.742 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:49.001 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:49.001 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme1", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme2", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme3", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme4", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme5", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 
00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme6", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme7", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme8", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme9", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 },{ 00:22:49.001 "params": { 00:22:49.001 "name": "Nvme10", 00:22:49.001 "trtype": "tcp", 00:22:49.001 "traddr": "10.0.0.2", 00:22:49.001 "adrfam": "ipv4", 00:22:49.001 "trsvcid": "4420", 00:22:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:49.001 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:49.001 "hdgst": false, 00:22:49.001 "ddgst": false 00:22:49.001 }, 00:22:49.001 "method": "bdev_nvme_attach_controller" 00:22:49.001 }' 00:22:49.001 [2024-07-25 10:37:52.476360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.001 [2024-07-25 10:37:52.546673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.378 Running I/O for 1 seconds... 
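Before the results below, it is worth noting how the bdevperf invocation traced above was assembled: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem via the config+=("$(cat <<-EOF ...)") pattern, joins them with IFS=',', and hands the result to bdevperf through process substitution, which is why the command line shows --json /dev/fd/62. The following is a rough, simplified sketch of that pattern only; gen_config is an illustrative stand-in for the real helper and SPDK_ROOT is an assumed path variable (the real helper additionally runs the joined entries through the jq . step visible in the trace).

gen_config() {
    local i cfgs=()
    for i in "$@"; do
        # One attach-controller entry per subsystem, mirroring the heredoc fragments in the trace.
        cfgs+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the per-subsystem fragments with commas, as the IFS=, / printf step in the trace does.
    local IFS=,
    printf '%s\n' "${cfgs[*]}"
}

# Process substitution hands the generated config to bdevperf as /dev/fd/NN.
"$SPDK_ROOT/build/examples/bdevperf" --json <(gen_config {1..10}) -q 64 -o 65536 -w verify -t 1

The {1..10} brace expansion matches the num_subsystems=({1..10}) array created earlier in the test, so every subsystem gets its own connected controller in the verify run.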
00:22:51.755
00:22:51.755 Latency(us)
00:22:51.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:51.755 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme1n1 : 1.13 283.13 17.70 0.00 0.00 224080.69 16567.50 204682.04
00:22:51.755 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme2n1 : 1.12 285.37 17.84 0.00 0.00 219197.77 17825.79 199648.87
00:22:51.755 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme3n1 : 1.12 285.20 17.82 0.00 0.00 216453.28 32715.57 189582.54
00:22:51.755 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme4n1 : 1.13 283.81 17.74 0.00 0.00 214708.72 18350.08 223136.97
00:22:51.755 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme5n1 : 1.14 281.51 17.59 0.00 0.00 213569.54 17091.79 207198.62
00:22:51.755 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme6n1 : 1.14 281.94 17.62 0.00 0.00 210273.24 20552.09 203843.17
00:22:51.755 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme7n1 : 1.11 287.81 17.99 0.00 0.00 202510.99 16882.07 204682.04
00:22:51.755 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme8n1 : 1.15 335.30 20.96 0.00 0.00 171859.56 16777.22 202165.45
00:22:51.755 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme9n1 : 1.15 278.76 17.42 0.00 0.00 203893.15 16777.22 218103.81
00:22:51.755 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.755 Verification LBA range: start 0x0 length 0x400
00:22:51.755 Nvme10n1 : 1.14 284.45 17.78 0.00 0.00 196660.57 2149.58 229847.86
00:22:51.755 ===================================================================================================================
00:22:51.755 Total : 2887.27 180.45 0.00 0.00 206610.19 2149.58 229847.86
00:22:51.755 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:22:51.755 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:51.755 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:51.755 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:51.755 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:22:51.755 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.756 10:37:55
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.756 rmmod nvme_tcp 00:22:51.756 rmmod nvme_fabrics 00:22:51.756 rmmod nvme_keyring 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3953251 ']' 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3953251 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3953251 ']' 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3953251 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.756 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3953251 00:22:52.014 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:52.014 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:52.014 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3953251' 00:22:52.014 killing process with pid 3953251 00:22:52.014 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3953251 00:22:52.014 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3953251 00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
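The killprocess sequence traced above is a guarded teardown: it verifies the pid is non-empty and still alive (kill -0), checks via ps --no-headers -o comm= that the process is the expected SPDK reactor (reactor_1) rather than a sudo wrapper, and only then sends the signal and reaps the child with wait so its exit status is collected before cleanup continues. A condensed sketch of that flow, under the assumption of a plain non-sudo child; kill_target is an illustrative name, not the helper itself:

kill_target() {
    local pid=$1
    [ -n "$pid" ] || return 1                # nothing to do without a pid
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for an SPDK app
    if [ "$name" = sudo ]; then
        return 1                             # the real helper handles sudo-wrapped processes separately
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reap the child and pick up its exit status
}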
00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.273 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.810 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.810 00:22:54.810 real 0m15.940s 00:22:54.810 user 0m34.695s 00:22:54.810 sys 0m6.468s 00:22:54.810 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:54.810 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:54.810 ************************************ 00:22:54.810 END TEST nvmf_shutdown_tc1 00:22:54.810 ************************************ 00:22:54.810 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:54.810 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:54.810 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:54.810 10:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:54.810 ************************************ 00:22:54.810 START TEST nvmf_shutdown_tc2 00:22:54.810 ************************************ 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.810 10:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.810 10:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:54.810 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.810 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:54.811 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.811 10:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:54.811 Found net devices under 0000:af:00.0: cvl_0_0 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:54.811 Found net devices under 0000:af:00.1: cvl_0_1 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.811 10:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:54.811 00:22:54.811 --- 10.0.0.2 ping statistics --- 00:22:54.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.811 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:54.811 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:22:54.812 00:22:54.812 --- 10.0.0.1 ping statistics --- 00:22:54.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.812 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3955104 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3955104 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3955104 ']' 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
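The nvmf/common.sh@229-@268 block above is the whole TCP test-bed bring-up: the target-side port is moved into its own network namespace so that initiator and target can exercise real NICs on a single host. A minimal stand-alone sketch of those steps, using the interface names, addresses and namespace from this run (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2, cvl_0_0_ns_spdk); it mirrors the traced commands rather than quoting common.sh itself:

# drop stale addressing, then isolate the target NIC in its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the initiator keeps 10.0.0.1 in the root namespace; the target gets 10.0.0.2 inside the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring both ends (and the namespaced loopback) up, then open the NVMe/TCP port
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check connectivity in both directions, as the two pings in the trace do
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmfappstart runs nvmf_tgt through ip netns exec cvl_0_0_ns_spdk (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock" line above), so everything the target listens on sits behind cvl_0_0.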
00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.812 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.812 [2024-07-25 10:37:58.478536] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:22:54.812 [2024-07-25 10:37:58.478579] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.072 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.072 [2024-07-25 10:37:58.549149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.072 [2024-07-25 10:37:58.623399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.072 [2024-07-25 10:37:58.623443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.072 [2024-07-25 10:37:58.623452] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.072 [2024-07-25 10:37:58.623460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.072 [2024-07-25 10:37:58.623484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.072 [2024-07-25 10:37:58.623585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.072 [2024-07-25 10:37:58.623677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.072 [2024-07-25 10:37:58.623788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.072 [2024-07-25 10:37:58.623788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.640 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.640 [2024-07-25 10:37:59.339956] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.900 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:22:55.900 Malloc1 00:22:55.900 [2024-07-25 10:37:59.450862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.900 Malloc2 00:22:55.900 Malloc3 00:22:55.900 Malloc4 00:22:55.900 Malloc5 00:22:56.160 Malloc6 00:22:56.160 Malloc7 00:22:56.160 Malloc8 00:22:56.160 Malloc9 00:22:56.160 Malloc10 00:22:56.160 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.160 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:56.160 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.160 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3955415 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3955415 /var/tmp/bdevperf.sock 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3955415 ']' 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:56.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
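At this point the target side is fully populated (the rpc_cmd replay above created the Malloc1 through Malloc10 bdevs, and the target is listening on 10.0.0.2 port 4420), and shutdown.sh@102-@104 starts the initiator-side load generator. The shape of that invocation, reconstructed from the trace; the /dev/fd/63 path in the traced command line is what a bash process substitution looks like, so reading the JSON from gen_nvmf_target_json through <(...) is an interpretation of the trace, not a quote from shutdown.sh:

# 64 outstanding 64 KiB verify I/Os against every attached controller, for 10 seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

Core placement is also visible here: nvmf_tgt was started with -m 0x1E (binary 11110, i.e. cores 1-4, which matches the four "Reactor started on core" notices earlier), while bdevperf runs with -c 0x1 and gets core 0 to itself, as its EAL and reactor lines further down confirm.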
00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.419 { 00:22:56.419 "params": { 00:22:56.419 "name": "Nvme$subsystem", 00:22:56.419 "trtype": "$TEST_TRANSPORT", 00:22:56.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.419 "adrfam": "ipv4", 00:22:56.419 "trsvcid": "$NVMF_PORT", 00:22:56.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.419 "hdgst": ${hdgst:-false}, 00:22:56.419 "ddgst": ${ddgst:-false} 00:22:56.419 }, 00:22:56.419 "method": "bdev_nvme_attach_controller" 00:22:56.419 } 00:22:56.419 EOF 00:22:56.419 )") 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.419 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 [2024-07-25 10:37:59.932512] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:22:56.420 [2024-07-25 10:37:59.932561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955415 ] 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 "adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.420 }, 00:22:56.420 "method": "bdev_nvme_attach_controller" 00:22:56.420 } 00:22:56.420 EOF 00:22:56.420 )") 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.420 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.420 { 00:22:56.420 "params": { 00:22:56.420 "name": "Nvme$subsystem", 00:22:56.420 "trtype": "$TEST_TRANSPORT", 00:22:56.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.420 
"adrfam": "ipv4", 00:22:56.420 "trsvcid": "$NVMF_PORT", 00:22:56.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.420 "hdgst": ${hdgst:-false}, 00:22:56.420 "ddgst": ${ddgst:-false} 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 } 00:22:56.421 EOF 00:22:56.421 )") 00:22:56.421 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:56.421 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.421 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:56.421 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:56.421 10:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme1", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme2", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme3", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme4", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme5", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme6", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme7", 
00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme8", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme9", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 },{ 00:22:56.421 "params": { 00:22:56.421 "name": "Nvme10", 00:22:56.421 "trtype": "tcp", 00:22:56.421 "traddr": "10.0.0.2", 00:22:56.421 "adrfam": "ipv4", 00:22:56.421 "trsvcid": "4420", 00:22:56.421 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:56.421 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:56.421 "hdgst": false, 00:22:56.421 "ddgst": false 00:22:56.421 }, 00:22:56.421 "method": "bdev_nvme_attach_controller" 00:22:56.421 }' 00:22:56.421 [2024-07-25 10:38:00.005450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.421 [2024-07-25 10:38:00.084290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.331 Running I/O for 10 seconds... 
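The long --json payload above is the output of nvmf/common.sh's gen_nvmf_target_json: for each subsystem number passed to it (1 through 10 here) it instantiates the here-doc template printed earlier in the trace, then joins the fragments into one document (the IFS=, / jq . / printf '%s\n' steps). The substitution is easy to check against the rendered output; a self-contained repro of just the first entry, using the values this run resolved to:

# values taken from this run; the template text is the one xtrace printed above
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF

Running that prints the Nvme1 block of the rendered config, field for field: tcp transport, traddr 10.0.0.2, port 4420, the cnode1/host1 NQNs, and both digests defaulting to false because hdgst and ddgst are unset.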
00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:58.331 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:58.331 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:58.331 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:58.591 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:58.591 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.591 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.591 10:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.591 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.591 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:58.591 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:58.591 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3955415 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3955415 ']' 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3955415 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3955415 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3955415' 00:22:58.851 killing process with pid 3955415 00:22:58.851 10:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3955415
00:22:58.851 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3955415
00:22:58.851 Received shutdown signal, test time was about 0.926868 seconds
00:22:58.851
00:22:58.851 Latency(us)
00:22:58.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:58.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme1n1 : 0.90 282.94 17.68 0.00 0.00 223917.47 20027.80 208876.34
00:22:58.851 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme2n1 : 0.89 287.05 17.94 0.00 0.00 216942.18 16567.50 206359.76
00:22:58.851 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme3n1 : 0.88 289.46 18.09 0.00 0.00 211444.12 19922.94 213909.50
00:22:58.851 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme4n1 : 0.93 345.48 21.59 0.00 0.00 173972.85 18874.37 197971.15
00:22:58.851 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme5n1 : 0.92 279.68 17.48 0.00 0.00 211675.75 18140.36 210554.06
00:22:58.851 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme6n1 : 0.91 281.32 17.58 0.00 0.00 206506.80 16986.93 206359.76
00:22:58.851 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme7n1 : 0.92 279.13 17.45 0.00 0.00 204685.52 18350.08 206359.76
00:22:58.851 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme8n1 : 0.90 284.75 17.80 0.00 0.00 196290.15 16672.36 204682.04
00:22:58.851 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.851 Nvme9n1 : 0.92 277.92 17.37 0.00 0.00 198204.21 17511.22 218103.81
00:22:58.851 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:58.851 Verification LBA range: start 0x0 length 0x400
00:22:58.852 Nvme10n1 : 0.92 277.25 17.33 0.00 0.00 195004.21 18245.22 236558.75
00:22:58.852 ===================================================================================================================
00:22:58.852 Total : 2885.00 180.31 0.00 0.00 203135.27 16567.50 236558.75
00:22:59.111 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3955104
00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.047 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.047 rmmod nvme_tcp 00:23:00.047 rmmod nvme_fabrics 00:23:00.307 rmmod nvme_keyring 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3955104 ']' 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3955104 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3955104 ']' 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3955104 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3955104 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3955104' 00:23:00.307 killing process with pid 3955104 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3955104 00:23:00.307 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3955104 00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
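One step back in the trace, before bdevperf was killed: the repeated rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops' rounds, with read_io_count climbing 3, then 67, then 195, are shutdown.sh's waitforio helper making sure at least 100 reads have completed before the target is shut down mid-I/O. Reconstructed from the shutdown.sh@50-@69 line references in the xtrace output; a sketch of the helper, not the verbatim script:

waitforio() {
    local rpc_sock=$1 bdev=$2                # /var/tmp/bdevperf.sock and Nvme1n1 in this run
    [ -n "$rpc_sock" ] || return 1           # shutdown.sh@50
    [ -n "$bdev" ] || return 1               # shutdown.sh@54
    local ret=1 i read_io_count              # shutdown.sh@57-@58
    for ((i = 10; i != 0; i--)); do          # shutdown.sh@59: at most ten polls
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')  # shutdown.sh@60
        if [ "$read_io_count" -ge 100 ]; then    # shutdown.sh@63: enough reads observed?
            ret=0                            # shutdown.sh@64
            break                            # shutdown.sh@65
        fi
        sleep 0.25                           # shutdown.sh@67
    done
    return $ret                              # shutdown.sh@69
}

Only after waitforio returns 0 does tc2 kill bdevperf (pid 3955415) and then the target itself, which is the sequence traced immediately above.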
00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.568 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:03.109 00:23:03.109 real 0m8.267s 00:23:03.109 user 0m24.910s 00:23:03.109 sys 0m1.658s 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.109 ************************************ 00:23:03.109 END TEST nvmf_shutdown_tc2 00:23:03.109 ************************************ 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:03.109 ************************************ 00:23:03.109 START TEST nvmf_shutdown_tc3 00:23:03.109 ************************************ 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
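tc2 winds down in the reverse order of its setup, and tc3's nvmftestinit (which begins right above) opens by calling the same remove_spdk_ns so it starts from a clean slate. Condensed from the traced teardown, with two caveats: $testdir below is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target, and the netns delete is an assumption, since xtrace does not expand the body of remove_spdk_ns:

# initiator first, then the target, then the network plumbing
kill 3955415 && wait 3955415              # killprocess: the bdevperf initiator
rm -f ./local-job0-0-verify.state         # stoptarget: bdevperf scratch state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
sync                                      # nvmfcleanup, before unloading kernel modules
modprobe -v -r nvme-tcp                   # also pulls out nvme_fabrics and nvme_keyring (the rmmod lines)
modprobe -v -r nvme-fabrics
kill 3955104 && wait 3955104              # killprocess: nvmf_tgt inside the namespace
ip netns delete cvl_0_0_ns_spdk           # assumed body of remove_spdk_ns (not shown by xtrace)
ip -4 addr flush cvl_0_1                  # last traced step before the tc2 timing summary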
00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.109 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:03.110 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:03.110 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.110 10:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:03.110 Found net devices under 0000:af:00.0: cvl_0_0 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:03.110 Found net devices under 0000:af:00.1: cvl_0_1 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.110 10:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.110 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:23:03.111 00:23:03.111 --- 10.0.0.2 ping statistics --- 00:23:03.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.111 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:23:03.111 00:23:03.111 --- 10.0.0.1 ping statistics --- 00:23:03.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.111 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3956606 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3956606 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3956606 ']' 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
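The nvmf_tcp_init block traced above isolates the two ports of the NIC under test: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side), an iptables rule opens TCP port 4420, and a ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace. A condensed sketch of that same sequence, with interface names and addresses copied from the log (a recap of the trace, not the script source):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                    # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator stays in the default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # sanity-check both directions
    ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0x1E &               # target runs in the namespace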
00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.111 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.370 [2024-07-25 10:38:06.835432] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:23:03.370 [2024-07-25 10:38:06.835478] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.370 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.371 [2024-07-25 10:38:06.907924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.371 [2024-07-25 10:38:06.979441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.371 [2024-07-25 10:38:06.979483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.371 [2024-07-25 10:38:06.979493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.371 [2024-07-25 10:38:06.979502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.371 [2024-07-25 10:38:06.979509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.371 [2024-07-25 10:38:06.979619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.371 [2024-07-25 10:38:06.979708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.371 [2024-07-25 10:38:06.979821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.371 [2024-07-25 10:38:06.979822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:03.971 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.971 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:03.971 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.971 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:03.971 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.231 [2024-07-25 10:38:07.691928] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:04.231 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.232 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.232 Malloc1 00:23:04.232 [2024-07-25 10:38:07.802855] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.232 Malloc2 00:23:04.232 Malloc3 00:23:04.232 Malloc4 00:23:04.491 Malloc5 00:23:04.491 Malloc6 00:23:04.491 Malloc7 00:23:04.491 Malloc8 00:23:04.491 Malloc9 00:23:04.491 Malloc10 00:23:04.491 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.491 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:04.491 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:04.491 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3956919 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3956919 /var/tmp/bdevperf.sock 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3956919 ']' 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:04.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
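Between the two "Waiting for process..." markers the test assembles its storage stack in one batch: shutdown.sh@26-28 appends, for each of the ten subsystems, a small block of RPC lines into rpcs.txt (the repeated "# cat" calls above), and shutdown.sh@35 replays the whole file with a single rpc_cmd invocation, which is why Malloc1 through Malloc10 and the "Target Listening on 10.0.0.2 port 4420" notice appear together in the trace. A rough sketch of that batching idiom, where emit_subsystem_rpcs is a hypothetical stand-in for the heredoc whose contents are not shown in this trace:

    rm -rf rpcs.txt
    for i in {1..10}; do
        # emit_subsystem_rpcs is hypothetical: the real heredoc prints the RPC lines
        # that create Malloc$i and nqn.2016-06.io.spdk:cnode$i (exact RPCs not traced here).
        emit_subsystem_rpcs "$i" >> rpcs.txt     # the '# cat' steps at shutdown.sh@28
    done
    rpc_cmd < rpcs.txt                           # shutdown.sh@35: replay the whole batch at once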
00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.752 "trtype": "$TEST_TRANSPORT", 00:23:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.752 "adrfam": "ipv4", 00:23:04.752 "trsvcid": "$NVMF_PORT", 00:23:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.752 "hdgst": ${hdgst:-false}, 00:23:04.752 "ddgst": ${ddgst:-false} 00:23:04.752 }, 00:23:04.752 "method": "bdev_nvme_attach_controller" 00:23:04.752 } 00:23:04.752 EOF 00:23:04.752 )") 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.752 "trtype": "$TEST_TRANSPORT", 00:23:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.752 "adrfam": "ipv4", 00:23:04.752 "trsvcid": "$NVMF_PORT", 00:23:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.752 "hdgst": ${hdgst:-false}, 00:23:04.752 "ddgst": ${ddgst:-false} 00:23:04.752 }, 00:23:04.752 "method": "bdev_nvme_attach_controller" 00:23:04.752 } 00:23:04.752 EOF 00:23:04.752 )") 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.752 "trtype": "$TEST_TRANSPORT", 00:23:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.752 "adrfam": "ipv4", 00:23:04.752 "trsvcid": "$NVMF_PORT", 00:23:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.752 "hdgst": ${hdgst:-false}, 00:23:04.752 "ddgst": ${ddgst:-false} 00:23:04.752 }, 00:23:04.752 "method": "bdev_nvme_attach_controller" 00:23:04.752 } 00:23:04.752 EOF 00:23:04.752 )") 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.752 "trtype": "$TEST_TRANSPORT", 00:23:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.752 "adrfam": "ipv4", 00:23:04.752 "trsvcid": "$NVMF_PORT", 00:23:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.752 "hdgst": ${hdgst:-false}, 00:23:04.752 "ddgst": ${ddgst:-false} 00:23:04.752 }, 00:23:04.752 "method": "bdev_nvme_attach_controller" 00:23:04.752 } 00:23:04.752 EOF 00:23:04.752 )") 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.752 "trtype": "$TEST_TRANSPORT", 00:23:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.752 "adrfam": "ipv4", 00:23:04.752 "trsvcid": "$NVMF_PORT", 00:23:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.752 "hdgst": ${hdgst:-false}, 00:23:04.752 "ddgst": ${ddgst:-false} 00:23:04.752 }, 00:23:04.752 "method": "bdev_nvme_attach_controller" 00:23:04.752 } 00:23:04.752 EOF 00:23:04.752 )") 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 [2024-07-25 10:38:08.278053] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:23:04.752 [2024-07-25 10:38:08.278103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956919 ] 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.752 "trtype": "$TEST_TRANSPORT", 00:23:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.752 "adrfam": "ipv4", 00:23:04.752 "trsvcid": "$NVMF_PORT", 00:23:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.752 "hdgst": ${hdgst:-false}, 00:23:04.752 "ddgst": ${ddgst:-false} 00:23:04.752 }, 00:23:04.752 "method": "bdev_nvme_attach_controller" 00:23:04.752 } 00:23:04.752 EOF 00:23:04.752 )") 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.752 "trtype": "$TEST_TRANSPORT", 00:23:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.752 "adrfam": "ipv4", 00:23:04.752 "trsvcid": "$NVMF_PORT", 00:23:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.752 "hdgst": ${hdgst:-false}, 00:23:04.752 "ddgst": ${ddgst:-false} 00:23:04.752 }, 00:23:04.752 "method": "bdev_nvme_attach_controller" 00:23:04.752 } 00:23:04.752 EOF 00:23:04.752 )") 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.752 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.752 { 00:23:04.752 "params": { 00:23:04.752 "name": "Nvme$subsystem", 00:23:04.753 "trtype": "$TEST_TRANSPORT", 00:23:04.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "$NVMF_PORT", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.753 "hdgst": ${hdgst:-false}, 00:23:04.753 "ddgst": ${ddgst:-false} 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 } 00:23:04.753 EOF 00:23:04.753 )") 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.753 { 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme$subsystem", 00:23:04.753 "trtype": "$TEST_TRANSPORT", 00:23:04.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "$NVMF_PORT", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.753 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.753 "hdgst": ${hdgst:-false}, 00:23:04.753 "ddgst": ${ddgst:-false} 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 } 00:23:04.753 EOF 00:23:04.753 )") 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.753 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.753 { 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme$subsystem", 00:23:04.753 "trtype": "$TEST_TRANSPORT", 00:23:04.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "$NVMF_PORT", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.753 "hdgst": ${hdgst:-false}, 00:23:04.753 "ddgst": ${ddgst:-false} 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 } 00:23:04.753 EOF 00:23:04.753 )") 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:04.753 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme1", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme2", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme3", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme4", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme5", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 
00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme6", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme7", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme8", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme9", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 },{ 00:23:04.753 "params": { 00:23:04.753 "name": "Nvme10", 00:23:04.753 "trtype": "tcp", 00:23:04.753 "traddr": "10.0.0.2", 00:23:04.753 "adrfam": "ipv4", 00:23:04.753 "trsvcid": "4420", 00:23:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:04.753 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:04.753 "hdgst": false, 00:23:04.753 "ddgst": false 00:23:04.753 }, 00:23:04.753 "method": "bdev_nvme_attach_controller" 00:23:04.753 }' 00:23:04.753 [2024-07-25 10:38:08.351001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.753 [2024-07-25 10:38:08.419152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.661 Running I/O for 10 seconds... 
00:23:06.661 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.661 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:06.661 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:06.661 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.661 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:06.661 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:06.921 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3956606 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3956606 ']' 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3956606 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3956606 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:07.185 10:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3956606' 00:23:07.185 killing process with pid 3956606 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3956606 00:23:07.185 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3956606 00:23:07.185 [2024-07-25 10:38:10.866703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d652d0 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) 
to be set 00:23:07.185 [2024-07-25 10:38:10.868755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.185 [2024-07-25 10:38:10.868772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.868992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869129] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.869138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65790 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the 
state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.186 [2024-07-25 10:38:10.871718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.871996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95340 is same with the state(5) to be set 00:23:07.187 [2024-07-25 
10:38:10.872792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.872990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same 
with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.187 [2024-07-25 10:38:10.873102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873188] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.873354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95820 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the 
state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.188 [2024-07-25 10:38:10.874640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 
10:38:10.874649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.874801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95b90 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.875961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.875986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.875996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same 
with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876202] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the 
state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.189 [2024-07-25 10:38:10.876442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.876530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96050 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.190 [2024-07-25 10:38:10.877526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.190
[2024-07-25 10:38:10.877545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.190 [2024-07-25 10:38:10.877554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.190 [2024-07-25 10:38:10.877563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.190 [2024-07-25 10:38:10.877573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.190 [2024-07-25 10:38:10.877582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.190 [2024-07-25 10:38:10.877591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.190 [2024-07-25 10:38:10.877614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7420 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190 [2024-07-25 10:38:10.877649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.190 [2024-07-25 10:38:10.877652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.190
[2024-07-25 10:38:10.877660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.190 [2024-07-25 10:38:10.877662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a4c30 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191
[2024-07-25 10:38:10.877783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b39a0 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877899] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96510 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.877960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.877969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469620 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.877996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137dbd0 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.878107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7610 is same with the state(5) to be set 00:23:07.191 [2024-07-25 10:38:10.878211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.191 [2024-07-25 10:38:10.878231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.191 [2024-07-25 10:38:10.878240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9190 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d1340 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.192 [2024-07-25 10:38:10.878486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.192 [2024-07-25 10:38:10.878486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138a460 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set 00:23:07.192 [2024-07-25 10:38:10.878562] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set
00:23:07.192 [2024-07-25 10:38:10.878570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set
[... the same tcp.c:1653 *ERROR* line repeats for tqpair=0x1b969d0 from 10:38:10.878579 through 10:38:10.879024 ...]
00:23:07.193 [2024-07-25 10:38:10.879033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b969d0 is same with the state(5) to be set
00:23:07.193 [2024-07-25 10:38:10.879354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.193 [2024-07-25 10:38:10.879376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE command / ABORTED - SQ DELETION completion pairs repeat for cid:8 through cid:62 (lba:25600 through lba:32512), 10:38:10.879395 through 10:38:10.880479 ...]
00:23:07.194 [2024-07-25 10:38:10.880490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.194 [2024-07-25 10:38:10.880499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.194 [2024-07-25 10:38:10.880509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.194 [2024-07-25 10:38:10.880518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ command / ABORTED - SQ DELETION completion pairs repeat for cid:1 through cid:5 (lba:24704 through lba:25216), 10:38:10.880529 through 10:38:10.880615 ...]
00:23:07.194 [2024-07-25 10:38:10.880627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.194 [2024-07-25 10:38:10.880638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.194 [2024-07-25 10:38:10.880706] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14096f0 was disconnected and freed. reset controller.
00:23:07.195 [2024-07-25 10:38:10.881054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.195 [2024-07-25 10:38:10.881075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE command / ABORTED - SQ DELETION completion pairs repeat for cid:12 through cid:44 (lba:26112 through lba:30208), 10:38:10.881090 through 10:38:10.881743 ...]
00:23:07.196 [2024-07-25 10:38:10.881753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.468 [2024-07-25 10:38:10.896705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE command / ABORTED - SQ DELETION completion pairs repeat for cid:46 through cid:63 (lba:30464 through lba:32640), 10:38:10.896753 through 10:38:10.897185, followed by READ command / ABORTED - SQ DELETION completion pairs for cid:0 through cid:10 (lba:24576 through lba:25856), 10:38:10.897198 through 10:38:10.897449 ...]
00:23:07.469 [2024-07-25 10:38:10.898526] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x135dec0 was disconnected and freed. reset controller.
00:23:07.469 [2024-07-25 10:38:10.898617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7420 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a4c30 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b39a0 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469620 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137dbd0 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:07.469 [2024-07-25 10:38:10.898790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.469 [2024-07-25 10:38:10.898804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:07.469 [2024-07-25 10:38:10.898817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.469 [2024-07-25 10:38:10.898830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:07.469 [2024-07-25 10:38:10.898843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.469 [2024-07-25 10:38:10.898856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:07.469 [2024-07-25 10:38:10.898869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.469 [2024-07-25 10:38:10.898882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1462d90 is same with the state(5) to be set
00:23:07.469 [2024-07-25 10:38:10.898906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7610 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9190 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d1340 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.898979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138a460 (9): Bad file descriptor
00:23:07.469 [2024-07-25 10:38:10.899035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.469 [2024-07-25 10:38:10.899050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.469 [2024-07-25 10:38:10.899073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.469 [2024-07-25 10:38:10.899086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE command / ABORTED - SQ DELETION completion pairs repeat for cid:2 through cid:46 (lba:24832 through lba:30464), 10:38:10.899101 through 10:38:10.900339 ...]
00:23:07.470 [2024-07-25 10:38:10.900354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:07.470 [2024-07-25 10:38:10.900665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.470 [2024-07-25 10:38:10.900680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.900693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.900708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.900724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.900739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.900752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.900766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.900780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.900794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.900807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.900822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.900837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.900915] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1364970 was disconnected and freed. reset controller. 
00:23:07.471 [2024-07-25 10:38:10.903916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:07.471 [2024-07-25 10:38:10.905540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:07.471 [2024-07-25 10:38:10.905803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:07.471 [2024-07-25 10:38:10.905828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b39a0 with addr=10.0.0.2, port=4420
00:23:07.471 [2024-07-25 10:38:10.905843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b39a0 is same with the state(5) to be set
00:23:07.471 [2024-07-25 10:38:10.905910] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:07.471 [2024-07-25 10:38:10.906254] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:07.471 [2024-07-25 10:38:10.906311] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:07.471 [2024-07-25 10:38:10.906364] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:07.471 [2024-07-25 10:38:10.906425] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:07.471 [2024-07-25 10:38:10.906836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:07.471 [2024-07-25 10:38:10.907051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:07.471 [2024-07-25 10:38:10.907072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a4c30 with addr=10.0.0.2, port=4420
00:23:07.471 [2024-07-25 10:38:10.907086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a4c30 is same with the state(5) to be set
00:23:07.471 [2024-07-25 10:38:10.907103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b39a0 (9): Bad file descriptor
00:23:07.471 [2024-07-25 10:38:10.907511] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:07.471 [2024-07-25 10:38:10.907570] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:07.471 [2024-07-25 10:38:10.907785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:07.471 [2024-07-25 10:38:10.907801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469620 with addr=10.0.0.2, port=4420
00:23:07.471 [2024-07-25 10:38:10.907813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469620 is same with the state(5) to be set
00:23:07.471 [2024-07-25 10:38:10.907825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a4c30 (9): Bad file descriptor
00:23:07.471 [2024-07-25 10:38:10.907838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:07.471 [2024-07-25 10:38:10.907848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:07.471 [2024-07-25 10:38:10.907858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:23:07.471 [2024-07-25 10:38:10.907946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:07.471 [2024-07-25 10:38:10.907959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469620 (9): Bad file descriptor
00:23:07.471 [2024-07-25 10:38:10.907970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:07.471 [2024-07-25 10:38:10.907979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:07.471 [2024-07-25 10:38:10.907989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:07.471 [2024-07-25 10:38:10.908054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:07.471 [2024-07-25 10:38:10.908064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:07.471 [2024-07-25 10:38:10.908074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:07.471 [2024-07-25 10:38:10.908083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:23:07.471 [2024-07-25 10:38:10.908128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:07.471 [2024-07-25 10:38:10.908627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1462d90 (9): Bad file descriptor
00:23:07.471 [2024-07-25 10:38:10.908774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.471 [2024-07-25 10:38:10.908789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.471 [2024-07-25 10:38:10.908806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.471 [2024-07-25 10:38:10.908816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.471 [2024-07-25 10:38:10.908828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.471 [2024-07-25 10:38:10.908837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.471 [2024-07-25 10:38:10.908849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.471 [2024-07-25 10:38:10.908859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.471 [2024-07-25 10:38:10.908871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.471 [2024-07-25 10:38:10.908881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.471 [2024-07-25 10:38:10.908893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.471 [2024-07-25 10:38:10.908902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.908914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.908924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.908935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.908945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.908956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.908966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.908978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.908987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.908999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.471 [2024-07-25 10:38:10.909162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.471 [2024-07-25 10:38:10.909173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:07.472 [2024-07-25 10:38:10.909347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 
[2024-07-25 10:38:10.909563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 
10:38:10.909782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.472 [2024-07-25 10:38:10.909954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.472 [2024-07-25 10:38:10.909964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.909976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.909985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.909997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.910157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.910168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408240 is same with the state(5) to be set 00:23:07.473 [2024-07-25 10:38:10.911207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911464] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.473 [2024-07-25 10:38:10.911648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.473 [2024-07-25 10:38:10.911658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.911983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.911993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:07.474 [2024-07-25 10:38:10.912350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.474 [2024-07-25 10:38:10.912510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.474 [2024-07-25 10:38:10.912522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.912531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.912543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.912553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 
10:38:10.912564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.912574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.912586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.912596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.912607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129df90 is same with the state(5) to be set 00:23:07.475 [2024-07-25 10:38:10.913649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.913982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.913992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.475 [2024-07-25 10:38:10.914402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.475 [2024-07-25 10:38:10.914413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.914982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.914993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.915003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.915015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.915025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.915035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129f3b0 is same with the state(5) to be set 00:23:07.476 [2024-07-25 10:38:10.916090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916161] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.476 [2024-07-25 10:38:10.916302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.476 [2024-07-25 10:38:10.916313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.477 [2024-07-25 10:38:10.916801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.477 [2024-07-25 10:38:10.916813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.916982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.916992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.917003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.478 [2024-07-25 10:38:10.917015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.917026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:07.478 [2024-07-25 10:38:10.917036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.478 [2024-07-25 10:38:10.917048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 
10:38:10.917251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.917468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.917479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a08a0 is same with the state(5) to be set 00:23:07.479 [2024-07-25 10:38:10.918492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918680] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.479 [2024-07-25 10:38:10.918791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.479 [2024-07-25 10:38:10.918802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.918986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.918995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:07.480 [2024-07-25 10:38:10.919489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.480 [2024-07-25 10:38:10.919561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.480 [2024-07-25 10:38:10.919570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 
10:38:10.919687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.919773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.919783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd2310 is same with the state(5) to be set 00:23:07.481 [2024-07-25 10:38:10.920740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.920984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.920994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.481 [2024-07-25 10:38:10.921220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.481 [2024-07-25 10:38:10.921229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.482 [2024-07-25 10:38:10.921966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.482 [2024-07-25 10:38:10.921976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.921986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.921996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.922005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.922016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d79d50 is same with the state(5) to be set 00:23:07.483 [2024-07-25 10:38:10.922954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:07.483 [2024-07-25 10:38:10.922972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] 
resetting controller
00:23:07.483 [2024-07-25 10:38:10.922983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:07.483 [2024-07-25 10:38:10.922993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:07.483 [2024-07-25 10:38:10.923076] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:07.483 [2024-07-25 10:38:10.923093] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:07.483 [2024-07-25 10:38:10.923157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:07.483 [2024-07-25 10:38:10.923169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:07.483 [2024-07-25 10:38:10.923541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:07.483 [2024-07-25 10:38:10.923556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9190 with addr=10.0.0.2, port=4420
00:23:07.483 [2024-07-25 10:38:10.923566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9190 is same with the state(5) to be set
00:23:07.483 [2024-07-25 10:38:10.923863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:07.483 [2024-07-25 10:38:10.923876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c7420 with addr=10.0.0.2, port=4420
00:23:07.483 [2024-07-25 10:38:10.923885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7420 is same with the state(5) to be set
00:23:07.483 [2024-07-25 10:38:10.924121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:07.483 [2024-07-25 10:38:10.924134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d1340 with addr=10.0.0.2, port=4420
00:23:07.483 [2024-07-25 10:38:10.924143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d1340 is same with the state(5) to be set
00:23:07.483 [2024-07-25 10:38:10.924455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:07.483 [2024-07-25 10:38:10.924468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x137dbd0 with addr=10.0.0.2, port=4420
00:23:07.483 [2024-07-25 10:38:10.924477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137dbd0 is same with the state(5) to be set
00:23:07.483 [2024-07-25 10:38:10.925832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.483 [2024-07-25 10:38:10.925849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.483 [2024-07-25 10:38:10.925863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:07.483 [2024-07-25 10:38:10.925872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.483 [2024-07-25 10:38:10.925883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.925892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.925904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.925917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.925928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.925937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.925948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.925957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.925967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.925976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.925987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.925996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:07.483 [2024-07-25 10:38:10.926094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.483 [2024-07-25 10:38:10.926205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.483 [2024-07-25 10:38:10.926214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 
10:38:10.926292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926490] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.484 [2024-07-25 10:38:10.926967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.484 [2024-07-25 10:38:10.926978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.485 [2024-07-25 10:38:10.926987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.485 [2024-07-25 10:38:10.926997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.485 [2024-07-25 10:38:10.927006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.485 [2024-07-25 10:38:10.927017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.485 [2024-07-25 10:38:10.927026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.485 [2024-07-25 10:38:10.927037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.485 [2024-07-25 10:38:10.927045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.485 [2024-07-25 10:38:10.927056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.485 [2024-07-25 10:38:10.927065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.485 [2024-07-25 10:38:10.927075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.485 [2024-07-25 10:38:10.927085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.485 [2024-07-25 10:38:10.927095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.485 [2024-07-25 10:38:10.927104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.485 [2024-07-25 10:38:10.927114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135c9e0 is same with the state(5) to be set 00:23:07.485 [2024-07-25 10:38:10.928349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:07.485 [2024-07-25 10:38:10.928368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.485 [2024-07-25 10:38:10.928380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:07.485 task offset: 25472 on job bdev=Nvme3n1 fails 00:23:07.485 00:23:07.485 Latency(us) 00:23:07.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.485 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme1n1 ended in about 0.93 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme1n1 : 0.93 207.18 12.95 69.06 0.00 229541.48 22858.96 261724.57 00:23:07.485 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme2n1 ended in about 0.93 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme2n1 : 0.93 212.24 13.27 68.60 0.00 222100.00 17406.36 238236.47 00:23:07.485 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme3n1 ended in about 0.92 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme3n1 : 0.92 207.84 12.99 69.28 0.00 221274.32 22858.96 244947.35 00:23:07.485 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme4n1 ended in about 0.94 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme4n1 : 0.94 205.28 12.83 68.43 0.00 220318.31 22124.95 266757.73 00:23:07.485 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme5n1 ended in about 0.94 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme5n1 : 0.94 136.50 8.53 68.25 0.00 289544.87 24956.11 268435.46 00:23:07.485 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme6n1 ended in about 0.94 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme6n1 : 0.94 136.14 8.51 68.07 0.00 285497.21 26004.68 270113.18 00:23:07.485 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme7n1 ended in about 0.94 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme7n1 : 0.94 203.72 12.73 67.91 0.00 210842.42 22544.38 219781.53 00:23:07.485 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme8n1 ended in about 0.94 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme8n1 : 0.94 203.24 12.70 67.75 
0.00 207633.41 20342.37 241591.91 00:23:07.485 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme9n1 ended in about 0.95 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme9n1 : 0.95 134.77 8.42 67.38 0.00 273425.20 24012.39 270113.18 00:23:07.485 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.485 Job: Nvme10n1 ended in about 0.93 seconds with error 00:23:07.485 Verification LBA range: start 0x0 length 0x400 00:23:07.485 Nvme10n1 : 0.93 207.46 12.97 69.15 0.00 195235.43 22439.53 218103.81 00:23:07.485 =================================================================================================================== 00:23:07.485 Total : 1854.38 115.90 683.88 0.00 231683.38 17406.36 270113.18 00:23:07.485 [2024-07-25 10:38:10.948771] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:07.485 [2024-07-25 10:38:10.948803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:07.485 [2024-07-25 10:38:10.949232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.485 [2024-07-25 10:38:10.949251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda7610 with addr=10.0.0.2, port=4420 00:23:07.485 [2024-07-25 10:38:10.949263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7610 is same with the state(5) to be set 00:23:07.485 [2024-07-25 10:38:10.949557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.485 [2024-07-25 10:38:10.949569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138a460 with addr=10.0.0.2, port=4420 00:23:07.485 [2024-07-25 10:38:10.949583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138a460 is same with the state(5) to be set 00:23:07.485 [2024-07-25 10:38:10.949598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9190 (9): Bad file descriptor 00:23:07.485 [2024-07-25 10:38:10.949612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7420 (9): Bad file descriptor 00:23:07.485 [2024-07-25 10:38:10.949624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d1340 (9): Bad file descriptor 00:23:07.485 [2024-07-25 10:38:10.949635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137dbd0 (9): Bad file descriptor 00:23:07.485 [2024-07-25 10:38:10.950076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.485 [2024-07-25 10:38:10.950093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b39a0 with addr=10.0.0.2, port=4420 00:23:07.485 [2024-07-25 10:38:10.950103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b39a0 is same with the state(5) to be set 00:23:07.485 [2024-07-25 10:38:10.950400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.485 [2024-07-25 10:38:10.950412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a4c30 with addr=10.0.0.2, port=4420 00:23:07.485 [2024-07-25 10:38:10.950421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a4c30 is same with the state(5) to be set 00:23:07.485 [2024-07-25 
10:38:10.950654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.485 [2024-07-25 10:38:10.950666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469620 with addr=10.0.0.2, port=4420 00:23:07.485 [2024-07-25 10:38:10.950675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469620 is same with the state(5) to be set 00:23:07.485 [2024-07-25 10:38:10.950912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.485 [2024-07-25 10:38:10.950924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1462d90 with addr=10.0.0.2, port=4420 00:23:07.485 [2024-07-25 10:38:10.950933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1462d90 is same with the state(5) to be set 00:23:07.485 [2024-07-25 10:38:10.950945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7610 (9): Bad file descriptor 00:23:07.485 [2024-07-25 10:38:10.950957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138a460 (9): Bad file descriptor 00:23:07.485 [2024-07-25 10:38:10.950967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:07.485 [2024-07-25 10:38:10.950976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:07.485 [2024-07-25 10:38:10.950986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:07.485 [2024-07-25 10:38:10.950999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:07.485 [2024-07-25 10:38:10.951008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:07.485 [2024-07-25 10:38:10.951016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:07.485 [2024-07-25 10:38:10.951027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:07.485 [2024-07-25 10:38:10.951036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.951044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:07.486 [2024-07-25 10:38:10.951055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:07.486 [2024-07-25 10:38:10.951067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.951075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:07.486 [2024-07-25 10:38:10.951106] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:07.486 [2024-07-25 10:38:10.951119] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:07.486 [2024-07-25 10:38:10.951132] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:07.486 [2024-07-25 10:38:10.951144] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:07.486 [2024-07-25 10:38:10.951157] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:07.486 [2024-07-25 10:38:10.951169] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:07.486 [2024-07-25 10:38:10.951462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.951474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.951482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.951489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.951498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b39a0 (9): Bad file descriptor 00:23:07.486 [2024-07-25 10:38:10.951510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a4c30 (9): Bad file descriptor 00:23:07.486 [2024-07-25 10:38:10.951521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469620 (9): Bad file descriptor 00:23:07.486 [2024-07-25 10:38:10.951532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1462d90 (9): Bad file descriptor 00:23:07.486 [2024-07-25 10:38:10.951542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:07.486 [2024-07-25 10:38:10.951550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.951559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:07.486 [2024-07-25 10:38:10.951569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:07.486 [2024-07-25 10:38:10.951578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.951586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:07.486 [2024-07-25 10:38:10.951885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.951900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.951908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:07.486 [2024-07-25 10:38:10.951917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.951925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:23:07.486 [2024-07-25 10:38:10.951936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.486 [2024-07-25 10:38:10.951944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.951953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:07.486 [2024-07-25 10:38:10.951963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:07.486 [2024-07-25 10:38:10.951974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.951983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:07.486 [2024-07-25 10:38:10.951994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:07.486 [2024-07-25 10:38:10.952002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:07.486 [2024-07-25 10:38:10.952011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:07.486 [2024-07-25 10:38:10.952045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.952054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.952061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.486 [2024-07-25 10:38:10.952069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
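The cascade above is the expected shutdown_tc3 failure mode: the target side has gone away while bdevperf still has I/O queued, so every outstanding READ completes as ABORTED - SQ DELETION and each reconnect attempt then fails with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. A minimal sketch of how that state can be confirmed from the initiator host, assuming a bash with /dev/tcp support and the same listener address used by the test; this probe is illustrative only and not part of the test scripts:

  # Probe the NVMe/TCP listener bdevperf was attached to. While the nvmf
  # target is alive the connect succeeds; once it is gone the connect is
  # refused immediately (errno 111), which is exactly the error
  # posix_sock_create reports in the log above.
  if timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "target still listening on 10.0.0.2:4420"
  else
      echo "connect() refused - reconnect attempts will keep failing until the target is restarted"
  fi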
00:23:07.745 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:07.745 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3956919 00:23:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3956919) - No such process 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.684 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.684 rmmod nvme_tcp 00:23:08.684 rmmod nvme_fabrics 00:23:08.685 rmmod nvme_keyring 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.685 10:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.685 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.216 00:23:11.216 real 0m8.064s 00:23:11.216 user 0m19.797s 00:23:11.216 sys 0m1.603s 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.216 ************************************ 00:23:11.216 END TEST nvmf_shutdown_tc3 00:23:11.216 ************************************ 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:11.216 00:23:11.216 real 0m32.675s 00:23:11.216 user 1m19.548s 00:23:11.216 sys 0m10.016s 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:11.216 ************************************ 00:23:11.216 END TEST nvmf_shutdown 00:23:11.216 ************************************ 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:23:11.216 00:23:11.216 real 11m12.062s 00:23:11.216 user 23m44.098s 00:23:11.216 sys 3m57.648s 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:11.216 10:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.216 ************************************ 00:23:11.216 END TEST nvmf_target_extra 00:23:11.216 ************************************ 00:23:11.216 10:38:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:11.216 10:38:14 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:11.216 10:38:14 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.216 10:38:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:11.216 ************************************ 00:23:11.216 START TEST nvmf_host 00:23:11.216 ************************************ 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:11.216 * Looking for test storage... 
00:23:11.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:11.216 10:38:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:11.217 10:38:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:11.217 10:38:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.217 10:38:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.217 ************************************ 00:23:11.217 START TEST nvmf_multicontroller 00:23:11.217 ************************************ 00:23:11.217 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:11.476 * Looking for test storage... 
00:23:11.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.476 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 10:38:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.477 10:38:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.052 10:38:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:18.052 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:18.052 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.052 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:18.053 Found net devices under 0000:af:00.0: cvl_0_0 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:18.053 Found net devices under 0000:af:00.1: cvl_0_1 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:18.053 00:23:18.053 --- 10.0.0.2 ping statistics --- 00:23:18.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.053 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:23:18.053 00:23:18.053 --- 10.0.0.1 ping statistics --- 00:23:18.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.053 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3961299 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3961299 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3961299 ']' 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.053 10:38:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:18.053 [2024-07-25 10:38:21.617533] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:23:18.053 [2024-07-25 10:38:21.617584] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.053 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.053 [2024-07-25 10:38:21.691956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:18.312 [2024-07-25 10:38:21.765322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.312 [2024-07-25 10:38:21.765360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.312 [2024-07-25 10:38:21.765370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.312 [2024-07-25 10:38:21.765379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.312 [2024-07-25 10:38:21.765386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.312 [2024-07-25 10:38:21.765430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.312 [2024-07-25 10:38:21.765513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.313 [2024-07-25 10:38:21.765515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 [2024-07-25 10:38:22.472148] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 Malloc0 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.881 
10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 [2024-07-25 10:38:22.538401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 [2024-07-25 10:38:22.546351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.881 Malloc1 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.881 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:18.882 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.882 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.882 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.882 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:18.882 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.882 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.141 10:38:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3961504 00:23:19.141 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3961504 /var/tmp/bdevperf.sock 00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3961504 ']' 00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.142 10:38:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.079 NVMe0n1 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.079 1 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.079 request: 00:23:20.079 { 00:23:20.079 "name": "NVMe0", 00:23:20.079 "trtype": "tcp", 00:23:20.079 "traddr": "10.0.0.2", 00:23:20.079 "adrfam": "ipv4", 00:23:20.079 
"trsvcid": "4420", 00:23:20.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.079 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:20.079 "hostaddr": "10.0.0.2", 00:23:20.079 "hostsvcid": "60000", 00:23:20.079 "prchk_reftag": false, 00:23:20.079 "prchk_guard": false, 00:23:20.079 "hdgst": false, 00:23:20.079 "ddgst": false, 00:23:20.079 "method": "bdev_nvme_attach_controller", 00:23:20.079 "req_id": 1 00:23:20.079 } 00:23:20.079 Got JSON-RPC error response 00:23:20.079 response: 00:23:20.079 { 00:23:20.079 "code": -114, 00:23:20.079 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:20.079 } 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:20.079 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.080 request: 00:23:20.080 { 00:23:20.080 "name": "NVMe0", 00:23:20.080 "trtype": "tcp", 00:23:20.080 "traddr": "10.0.0.2", 00:23:20.080 "adrfam": "ipv4", 00:23:20.080 "trsvcid": "4420", 00:23:20.080 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.080 "hostaddr": "10.0.0.2", 00:23:20.080 "hostsvcid": "60000", 00:23:20.080 "prchk_reftag": false, 00:23:20.080 "prchk_guard": false, 00:23:20.080 "hdgst": false, 00:23:20.080 "ddgst": false, 00:23:20.080 "method": "bdev_nvme_attach_controller", 00:23:20.080 "req_id": 1 00:23:20.080 } 00:23:20.080 Got JSON-RPC error response 00:23:20.080 response: 00:23:20.080 { 00:23:20.080 "code": -114, 00:23:20.080 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:23:20.080 } 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.080 request: 00:23:20.080 { 00:23:20.080 "name": "NVMe0", 00:23:20.080 "trtype": "tcp", 00:23:20.080 "traddr": "10.0.0.2", 00:23:20.080 "adrfam": "ipv4", 00:23:20.080 "trsvcid": "4420", 00:23:20.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.080 "hostaddr": "10.0.0.2", 00:23:20.080 "hostsvcid": "60000", 00:23:20.080 "prchk_reftag": false, 00:23:20.080 "prchk_guard": false, 00:23:20.080 "hdgst": false, 00:23:20.080 "ddgst": false, 00:23:20.080 "multipath": "disable", 00:23:20.080 "method": "bdev_nvme_attach_controller", 00:23:20.080 "req_id": 1 00:23:20.080 } 00:23:20.080 Got JSON-RPC error response 00:23:20.080 response: 00:23:20.080 { 00:23:20.080 "code": -114, 00:23:20.080 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:20.080 } 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.080 request: 00:23:20.080 { 00:23:20.080 "name": "NVMe0", 00:23:20.080 "trtype": "tcp", 00:23:20.080 "traddr": "10.0.0.2", 00:23:20.080 "adrfam": "ipv4", 00:23:20.080 "trsvcid": "4420", 00:23:20.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.080 "hostaddr": "10.0.0.2", 00:23:20.080 "hostsvcid": "60000", 00:23:20.080 "prchk_reftag": false, 00:23:20.080 "prchk_guard": false, 00:23:20.080 "hdgst": false, 00:23:20.080 "ddgst": false, 00:23:20.080 "multipath": "failover", 00:23:20.080 "method": "bdev_nvme_attach_controller", 00:23:20.080 "req_id": 1 00:23:20.080 } 00:23:20.080 Got JSON-RPC error response 00:23:20.080 response: 00:23:20.080 { 00:23:20.080 "code": -114, 00:23:20.080 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:20.080 } 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.080 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.339 00:23:20.340 10:38:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.340 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:20.340 10:38:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.340 10:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.340 10:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:20.340 10:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.717 0 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3961504 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3961504 ']' 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3961504 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3961504 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3961504' 00:23:21.717 killing process with pid 3961504 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3961504 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3961504 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:21.717 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:21.717 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:21.717 [2024-07-25 10:38:22.650497] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:23:21.717 [2024-07-25 10:38:22.650549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961504 ] 00:23:21.717 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.717 [2024-07-25 10:38:22.718687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.717 [2024-07-25 10:38:22.788054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.717 [2024-07-25 10:38:23.993222] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name cb8f4ea6-2385-4407-a711-68b804a26842 already exists 00:23:21.717 [2024-07-25 10:38:23.993253] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:cb8f4ea6-2385-4407-a711-68b804a26842 alias for bdev NVMe1n1 00:23:21.717 [2024-07-25 10:38:23.993263] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:21.717 Running I/O for 1 seconds... 00:23:21.717 00:23:21.717 Latency(us) 00:23:21.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.718 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:21.718 NVMe0n1 : 1.01 24477.57 95.62 0.00 0.00 5212.75 3617.59 10695.48 00:23:21.718 =================================================================================================================== 00:23:21.718 Total : 24477.57 95.62 0.00 0.00 5212.75 3617.59 10695.48 00:23:21.718 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.718 00:23:21.718 Latency(us) 00:23:21.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.718 =================================================================================================================== 00:23:21.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.718 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:21.718 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.718 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:21.718 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:21.718 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.718 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:21.976 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.976 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:21.976 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.976 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.976 rmmod nvme_tcp 00:23:21.976 rmmod nvme_fabrics 00:23:21.976 rmmod nvme_keyring 00:23:21.976 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3961299 ']' 00:23:21.977 10:38:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3961299 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3961299 ']' 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3961299 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3961299 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3961299' 00:23:21.977 killing process with pid 3961299 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3961299 00:23:21.977 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3961299 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.235 10:38:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.179 10:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.179 00:23:24.179 real 0m13.012s 00:23:24.179 user 0m16.578s 00:23:24.179 sys 0m6.074s 00:23:24.179 10:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:24.179 10:38:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.179 ************************************ 00:23:24.179 END TEST nvmf_multicontroller 00:23:24.179 ************************************ 00:23:24.179 10:38:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:24.179 10:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:24.179 10:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:24.179 10:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.438 ************************************ 00:23:24.438 START TEST nvmf_aer 00:23:24.438 ************************************ 00:23:24.438 10:38:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:24.438 * Looking for test storage... 00:23:24.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:24.438 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.439 10:38:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:31.008 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:31.008 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:31.008 Found net devices under 0000:af:00.0: cvl_0_0 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.008 10:38:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:31.008 Found net devices under 0000:af:00.1: cvl_0_1 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.008 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:23:31.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:23:31.009 00:23:31.009 --- 10.0.0.2 ping statistics --- 00:23:31.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.009 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:23:31.009 00:23:31.009 --- 10.0.0.1 ping statistics --- 00:23:31.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.009 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3965630 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3965630 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3965630 ']' 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.009 10:38:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.009 [2024-07-25 10:38:34.681156] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:23:31.009 [2024-07-25 10:38:34.681208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.267 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.267 [2024-07-25 10:38:34.755132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.267 [2024-07-25 10:38:34.830068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.267 [2024-07-25 10:38:34.830108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.267 [2024-07-25 10:38:34.830118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.267 [2024-07-25 10:38:34.830126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.267 [2024-07-25 10:38:34.830150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.267 [2024-07-25 10:38:34.830203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.267 [2024-07-25 10:38:34.830297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.267 [2024-07-25 10:38:34.830380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.267 [2024-07-25 10:38:34.830382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.833 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.092 [2024-07-25 10:38:35.542134] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.092 Malloc0 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.092 10:38:35 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.092 [2024-07-25 10:38:35.596931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.092 [ 00:23:32.092 { 00:23:32.092 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:32.092 "subtype": "Discovery", 00:23:32.092 "listen_addresses": [], 00:23:32.092 "allow_any_host": true, 00:23:32.092 "hosts": [] 00:23:32.092 }, 00:23:32.092 { 00:23:32.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.092 "subtype": "NVMe", 00:23:32.092 "listen_addresses": [ 00:23:32.092 { 00:23:32.092 "trtype": "TCP", 00:23:32.092 "adrfam": "IPv4", 00:23:32.092 "traddr": "10.0.0.2", 00:23:32.092 "trsvcid": "4420" 00:23:32.092 } 00:23:32.092 ], 00:23:32.092 "allow_any_host": true, 00:23:32.092 "hosts": [], 00:23:32.092 "serial_number": "SPDK00000000000001", 00:23:32.092 "model_number": "SPDK bdev Controller", 00:23:32.092 "max_namespaces": 2, 00:23:32.092 "min_cntlid": 1, 00:23:32.092 "max_cntlid": 65519, 00:23:32.092 "namespaces": [ 00:23:32.092 { 00:23:32.092 "nsid": 1, 00:23:32.092 "bdev_name": "Malloc0", 00:23:32.092 "name": "Malloc0", 00:23:32.092 "nguid": "7F54E96E3D6843ECBBA9F85104455F37", 00:23:32.092 "uuid": "7f54e96e-3d68-43ec-bba9-f85104455f37" 00:23:32.092 } 00:23:32.092 ] 00:23:32.092 } 00:23:32.092 ] 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3965754 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:32.092 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:32.092 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.352 Malloc1 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.352 Asynchronous Event Request test 00:23:32.352 Attaching to 10.0.0.2 00:23:32.352 Attached to 10.0.0.2 00:23:32.352 Registering asynchronous event callbacks... 00:23:32.352 Starting namespace attribute notice tests for all controllers... 00:23:32.352 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:32.352 aer_cb - Changed Namespace 00:23:32.352 Cleaning up... 
00:23:32.352 [ 00:23:32.352 { 00:23:32.352 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:32.352 "subtype": "Discovery", 00:23:32.352 "listen_addresses": [], 00:23:32.352 "allow_any_host": true, 00:23:32.352 "hosts": [] 00:23:32.352 }, 00:23:32.352 { 00:23:32.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.352 "subtype": "NVMe", 00:23:32.352 "listen_addresses": [ 00:23:32.352 { 00:23:32.352 "trtype": "TCP", 00:23:32.352 "adrfam": "IPv4", 00:23:32.352 "traddr": "10.0.0.2", 00:23:32.352 "trsvcid": "4420" 00:23:32.352 } 00:23:32.352 ], 00:23:32.352 "allow_any_host": true, 00:23:32.352 "hosts": [], 00:23:32.352 "serial_number": "SPDK00000000000001", 00:23:32.352 "model_number": "SPDK bdev Controller", 00:23:32.352 "max_namespaces": 2, 00:23:32.352 "min_cntlid": 1, 00:23:32.352 "max_cntlid": 65519, 00:23:32.352 "namespaces": [ 00:23:32.352 { 00:23:32.352 "nsid": 1, 00:23:32.352 "bdev_name": "Malloc0", 00:23:32.352 "name": "Malloc0", 00:23:32.352 "nguid": "7F54E96E3D6843ECBBA9F85104455F37", 00:23:32.352 "uuid": "7f54e96e-3d68-43ec-bba9-f85104455f37" 00:23:32.352 }, 00:23:32.352 { 00:23:32.352 "nsid": 2, 00:23:32.352 "bdev_name": "Malloc1", 00:23:32.352 "name": "Malloc1", 00:23:32.352 "nguid": "27EF1B23D77A4C69A82575BFB22E2B68", 00:23:32.352 "uuid": "27ef1b23-d77a-4c69-a825-75bfb22e2b68" 00:23:32.352 } 00:23:32.352 ] 00:23:32.352 } 00:23:32.352 ] 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3965754 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:32.352 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.353 10:38:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.353 rmmod 
nvme_tcp 00:23:32.353 rmmod nvme_fabrics 00:23:32.353 rmmod nvme_keyring 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3965630 ']' 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3965630 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3965630 ']' 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3965630 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.353 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3965630 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3965630' 00:23:32.610 killing process with pid 3965630 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3965630 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3965630 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.610 10:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.143 10:38:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.143 00:23:35.143 real 0m10.436s 00:23:35.143 user 0m7.587s 00:23:35.143 sys 0m5.540s 00:23:35.143 10:38:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:35.143 10:38:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.143 ************************************ 00:23:35.143 END TEST nvmf_aer 00:23:35.143 ************************************ 00:23:35.143 10:38:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.144 
************************************ 00:23:35.144 START TEST nvmf_async_init 00:23:35.144 ************************************ 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:35.144 * Looking for test storage... 00:23:35.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.144 
10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8054709425584ad59035f60e6388310f 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.144 10:38:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:41.715 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:41.715 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.715 
10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:41.715 Found net devices under 0000:af:00.0: cvl_0_0 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:41.715 Found net devices under 0000:af:00.1: cvl_0_1 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.715 10:38:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.715 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:23:41.716 00:23:41.716 --- 10.0.0.2 ping statistics --- 00:23:41.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.716 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:23:41.716 00:23:41.716 --- 10.0.0.1 ping statistics --- 00:23:41.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.716 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3969461 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3969461 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3969461 ']' 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.716 10:38:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.975 [2024-07-25 10:38:45.430663] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:23:41.975 [2024-07-25 10:38:45.430723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.975 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.975 [2024-07-25 10:38:45.506334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.975 [2024-07-25 10:38:45.580380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.975 [2024-07-25 10:38:45.580418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.975 [2024-07-25 10:38:45.580428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.975 [2024-07-25 10:38:45.580436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.975 [2024-07-25 10:38:45.580444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.975 [2024-07-25 10:38:45.580481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.542 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.542 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:42.542 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.542 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.542 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.801 [2024-07-25 10:38:46.283453] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.801 null0 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:42.801 10:38:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8054709425584ad59035f60e6388310f 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.801 [2024-07-25 10:38:46.323666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.801 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.060 nvme0n1 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.060 [ 00:23:43.060 { 00:23:43.060 "name": "nvme0n1", 00:23:43.060 "aliases": [ 00:23:43.060 "80547094-2558-4ad5-9035-f60e6388310f" 00:23:43.060 ], 00:23:43.060 "product_name": "NVMe disk", 00:23:43.060 "block_size": 512, 00:23:43.060 "num_blocks": 2097152, 00:23:43.060 "uuid": "80547094-2558-4ad5-9035-f60e6388310f", 00:23:43.060 "assigned_rate_limits": { 00:23:43.060 "rw_ios_per_sec": 0, 00:23:43.060 "rw_mbytes_per_sec": 0, 00:23:43.060 "r_mbytes_per_sec": 0, 00:23:43.060 "w_mbytes_per_sec": 0 00:23:43.060 }, 00:23:43.060 "claimed": false, 00:23:43.060 "zoned": false, 00:23:43.060 "supported_io_types": { 00:23:43.060 "read": true, 00:23:43.060 "write": true, 00:23:43.060 "unmap": false, 00:23:43.060 "flush": true, 00:23:43.060 "reset": true, 00:23:43.060 "nvme_admin": true, 00:23:43.060 "nvme_io": true, 00:23:43.060 "nvme_io_md": false, 00:23:43.060 "write_zeroes": true, 00:23:43.060 "zcopy": false, 00:23:43.060 "get_zone_info": false, 00:23:43.060 "zone_management": false, 00:23:43.060 "zone_append": false, 00:23:43.060 "compare": true, 00:23:43.060 "compare_and_write": true, 00:23:43.060 "abort": true, 00:23:43.060 "seek_hole": false, 00:23:43.060 "seek_data": false, 00:23:43.060 "copy": true, 00:23:43.060 "nvme_iov_md": 
false 00:23:43.060 }, 00:23:43.060 "memory_domains": [ 00:23:43.060 { 00:23:43.060 "dma_device_id": "system", 00:23:43.060 "dma_device_type": 1 00:23:43.060 } 00:23:43.060 ], 00:23:43.060 "driver_specific": { 00:23:43.060 "nvme": [ 00:23:43.060 { 00:23:43.060 "trid": { 00:23:43.060 "trtype": "TCP", 00:23:43.060 "adrfam": "IPv4", 00:23:43.060 "traddr": "10.0.0.2", 00:23:43.060 "trsvcid": "4420", 00:23:43.060 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.060 }, 00:23:43.060 "ctrlr_data": { 00:23:43.060 "cntlid": 1, 00:23:43.060 "vendor_id": "0x8086", 00:23:43.060 "model_number": "SPDK bdev Controller", 00:23:43.060 "serial_number": "00000000000000000000", 00:23:43.060 "firmware_revision": "24.09", 00:23:43.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.060 "oacs": { 00:23:43.060 "security": 0, 00:23:43.060 "format": 0, 00:23:43.060 "firmware": 0, 00:23:43.060 "ns_manage": 0 00:23:43.060 }, 00:23:43.060 "multi_ctrlr": true, 00:23:43.060 "ana_reporting": false 00:23:43.060 }, 00:23:43.060 "vs": { 00:23:43.060 "nvme_version": "1.3" 00:23:43.060 }, 00:23:43.060 "ns_data": { 00:23:43.060 "id": 1, 00:23:43.060 "can_share": true 00:23:43.060 } 00:23:43.060 } 00:23:43.060 ], 00:23:43.060 "mp_policy": "active_passive" 00:23:43.060 } 00:23:43.060 } 00:23:43.060 ] 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.060 [2024-07-25 10:38:46.592229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.060 [2024-07-25 10:38:46.592284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16654d0 (9): Bad file descriptor 00:23:43.060 [2024-07-25 10:38:46.723799] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.060 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.060 [ 00:23:43.060 { 00:23:43.060 "name": "nvme0n1", 00:23:43.060 "aliases": [ 00:23:43.060 "80547094-2558-4ad5-9035-f60e6388310f" 00:23:43.060 ], 00:23:43.060 "product_name": "NVMe disk", 00:23:43.060 "block_size": 512, 00:23:43.060 "num_blocks": 2097152, 00:23:43.060 "uuid": "80547094-2558-4ad5-9035-f60e6388310f", 00:23:43.060 "assigned_rate_limits": { 00:23:43.060 "rw_ios_per_sec": 0, 00:23:43.060 "rw_mbytes_per_sec": 0, 00:23:43.060 "r_mbytes_per_sec": 0, 00:23:43.060 "w_mbytes_per_sec": 0 00:23:43.060 }, 00:23:43.060 "claimed": false, 00:23:43.060 "zoned": false, 00:23:43.060 "supported_io_types": { 00:23:43.060 "read": true, 00:23:43.060 "write": true, 00:23:43.060 "unmap": false, 00:23:43.060 "flush": true, 00:23:43.060 "reset": true, 00:23:43.060 "nvme_admin": true, 00:23:43.060 "nvme_io": true, 00:23:43.060 "nvme_io_md": false, 00:23:43.060 "write_zeroes": true, 00:23:43.060 "zcopy": false, 00:23:43.060 "get_zone_info": false, 00:23:43.060 "zone_management": false, 00:23:43.060 "zone_append": false, 00:23:43.060 "compare": true, 00:23:43.060 "compare_and_write": true, 00:23:43.060 "abort": true, 00:23:43.060 "seek_hole": false, 00:23:43.060 "seek_data": false, 00:23:43.060 "copy": true, 00:23:43.060 "nvme_iov_md": false 00:23:43.060 }, 00:23:43.060 "memory_domains": [ 00:23:43.060 { 00:23:43.060 "dma_device_id": "system", 00:23:43.060 "dma_device_type": 1 00:23:43.060 } 00:23:43.060 ], 00:23:43.060 "driver_specific": { 00:23:43.060 "nvme": [ 00:23:43.060 { 00:23:43.060 "trid": { 00:23:43.060 "trtype": "TCP", 00:23:43.060 "adrfam": "IPv4", 00:23:43.060 "traddr": "10.0.0.2", 00:23:43.061 "trsvcid": "4420", 00:23:43.061 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.061 }, 00:23:43.061 "ctrlr_data": { 00:23:43.061 "cntlid": 2, 00:23:43.061 "vendor_id": "0x8086", 00:23:43.061 "model_number": "SPDK bdev Controller", 00:23:43.061 "serial_number": "00000000000000000000", 00:23:43.061 "firmware_revision": "24.09", 00:23:43.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.061 "oacs": { 00:23:43.061 "security": 0, 00:23:43.061 "format": 0, 00:23:43.061 "firmware": 0, 00:23:43.061 "ns_manage": 0 00:23:43.061 }, 00:23:43.061 "multi_ctrlr": true, 00:23:43.061 "ana_reporting": false 00:23:43.061 }, 00:23:43.061 "vs": { 00:23:43.061 "nvme_version": "1.3" 00:23:43.061 }, 00:23:43.061 "ns_data": { 00:23:43.061 "id": 1, 00:23:43.061 "can_share": true 00:23:43.061 } 00:23:43.061 } 00:23:43.061 ], 00:23:43.061 "mp_policy": "active_passive" 00:23:43.061 } 00:23:43.061 } 00:23:43.061 ] 00:23:43.061 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.061 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.061 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.061 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.319 10:38:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wNXZ8uWDI1 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wNXZ8uWDI1 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 [2024-07-25 10:38:46.796870] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.319 [2024-07-25 10:38:46.796991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNXZ8uWDI1 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 [2024-07-25 10:38:46.804889] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wNXZ8uWDI1 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 [2024-07-25 10:38:46.816933] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.319 [2024-07-25 10:38:46.816969] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:43.319 nvme0n1 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 [ 00:23:43.319 { 00:23:43.319 "name": "nvme0n1", 00:23:43.319 "aliases": [ 00:23:43.319 "80547094-2558-4ad5-9035-f60e6388310f" 00:23:43.319 ], 00:23:43.319 "product_name": "NVMe disk", 00:23:43.319 "block_size": 512, 00:23:43.319 "num_blocks": 2097152, 00:23:43.319 "uuid": "80547094-2558-4ad5-9035-f60e6388310f", 00:23:43.319 "assigned_rate_limits": { 00:23:43.319 "rw_ios_per_sec": 0, 00:23:43.319 "rw_mbytes_per_sec": 0, 00:23:43.319 "r_mbytes_per_sec": 0, 00:23:43.319 "w_mbytes_per_sec": 0 00:23:43.319 }, 00:23:43.319 "claimed": false, 00:23:43.319 "zoned": false, 00:23:43.319 "supported_io_types": { 00:23:43.319 "read": true, 00:23:43.319 "write": true, 00:23:43.319 "unmap": false, 00:23:43.319 "flush": true, 00:23:43.319 "reset": true, 00:23:43.319 "nvme_admin": true, 00:23:43.319 "nvme_io": true, 00:23:43.319 "nvme_io_md": false, 00:23:43.319 "write_zeroes": true, 00:23:43.319 "zcopy": false, 00:23:43.319 "get_zone_info": false, 00:23:43.319 "zone_management": false, 00:23:43.319 "zone_append": false, 00:23:43.319 "compare": true, 00:23:43.319 "compare_and_write": true, 00:23:43.319 "abort": true, 00:23:43.319 "seek_hole": false, 00:23:43.319 "seek_data": false, 00:23:43.319 "copy": true, 00:23:43.319 "nvme_iov_md": false 00:23:43.319 }, 00:23:43.319 "memory_domains": [ 00:23:43.319 { 00:23:43.319 "dma_device_id": "system", 00:23:43.319 "dma_device_type": 1 00:23:43.319 } 00:23:43.319 ], 00:23:43.319 "driver_specific": { 00:23:43.319 "nvme": [ 00:23:43.319 { 00:23:43.319 "trid": { 00:23:43.319 "trtype": "TCP", 00:23:43.319 "adrfam": "IPv4", 00:23:43.319 "traddr": "10.0.0.2", 00:23:43.319 "trsvcid": "4421", 00:23:43.319 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.319 }, 00:23:43.319 "ctrlr_data": { 00:23:43.319 "cntlid": 3, 00:23:43.319 "vendor_id": "0x8086", 00:23:43.319 "model_number": "SPDK bdev Controller", 00:23:43.319 "serial_number": "00000000000000000000", 00:23:43.319 "firmware_revision": "24.09", 00:23:43.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.319 "oacs": { 00:23:43.319 "security": 0, 00:23:43.319 "format": 0, 00:23:43.319 "firmware": 0, 00:23:43.319 "ns_manage": 0 00:23:43.319 }, 00:23:43.319 "multi_ctrlr": true, 00:23:43.319 "ana_reporting": false 00:23:43.319 }, 00:23:43.319 "vs": { 00:23:43.319 "nvme_version": "1.3" 00:23:43.319 }, 00:23:43.319 "ns_data": { 00:23:43.319 "id": 1, 00:23:43.319 "can_share": true 00:23:43.319 } 00:23:43.319 } 00:23:43.319 ], 00:23:43.319 "mp_policy": "active_passive" 00:23:43.319 } 00:23:43.319 } 00:23:43.319 ] 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.wNXZ8uWDI1 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:43.319 10:38:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.319 rmmod nvme_tcp 00:23:43.319 rmmod nvme_fabrics 00:23:43.319 rmmod nvme_keyring 00:23:43.319 10:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3969461 ']' 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3969461 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3969461 ']' 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3969461 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.319 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3969461 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3969461' 00:23:43.577 killing process with pid 3969461 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3969461 00:23:43.577 [2024-07-25 10:38:47.065727] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:43.577 [2024-07-25 10:38:47.065751] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3969461 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.577 10:38:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.577 10:38:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.110 00:23:46.110 real 0m10.897s 00:23:46.110 user 0m3.820s 00:23:46.110 sys 0m5.719s 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.110 ************************************ 00:23:46.110 END TEST nvmf_async_init 00:23:46.110 ************************************ 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.110 ************************************ 00:23:46.110 START TEST dma 00:23:46.110 ************************************ 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:46.110 * Looking for test storage... 00:23:46.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:46.110 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.111 
10:38:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.111 10:38:49 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:46.111 00:23:46.111 real 0m0.144s 00:23:46.111 user 0m0.055s 00:23:46.111 sys 0m0.099s 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:46.111 ************************************ 00:23:46.111 END TEST dma 00:23:46.111 ************************************ 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.111 ************************************ 00:23:46.111 START TEST nvmf_identify 00:23:46.111 ************************************ 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:46.111 * Looking for test storage... 00:23:46.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.111 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.112 10:38:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.740 10:38:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:52.740 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.740 10:38:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:52.740 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:52.740 Found net devices under 0000:af:00.0: cvl_0_0 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:52.740 Found net devices under 0000:af:00.1: cvl_0_1 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.740 10:38:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.740 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.740 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.740 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.740 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.740 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.740 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.740 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:23:52.741 00:23:52.741 --- 10.0.0.2 ping statistics --- 00:23:52.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.741 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:23:52.741 
00:23:52.741 --- 10.0.0.1 ping statistics ---
00:23:52.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:52.741 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3973429
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3973429
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3973429 ']'
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:52.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:52.741 10:38:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:52.741 [2024-07-25 10:38:56.352531] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
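The bring-up logged above (nvmf_tcp_init in nvmf/common.sh, followed by host/identify.sh starting the target) can be reproduced by hand with the sketch below. It is assembled only from the commands visible in this run's xtrace; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, the 10.0.0.x addresses, and the nvmf_tgt flags (-i 0 -e 0xFFFF -m 0xF) are values from this particular host rather than universal defaults, and the relative build/bin path assumes the commands are run from an SPDK checkout.

  # Move the target-side port into its own network namespace and address both sides.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Permit NVMe/TCP traffic on port 4420 and verify connectivity in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Load the host-side driver and start the SPDK target inside the namespace.
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &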
00:23:52.741 [2024-07-25 10:38:56.352582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.741 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.741 [2024-07-25 10:38:56.425529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.000 [2024-07-25 10:38:56.497449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.000 [2024-07-25 10:38:56.497491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.000 [2024-07-25 10:38:56.497500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.000 [2024-07-25 10:38:56.497508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.001 [2024-07-25 10:38:56.497532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.001 [2024-07-25 10:38:56.497589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.001 [2024-07-25 10:38:56.497683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.001 [2024-07-25 10:38:56.497753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.001 [2024-07-25 10:38:56.497755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.568 [2024-07-25 10:38:57.162919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.568 Malloc0 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.568 [2024-07-25 10:38:57.261565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.568 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.829 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.829 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:53.829 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.829 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.829 [ 00:23:53.829 { 00:23:53.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.829 "subtype": "Discovery", 00:23:53.829 "listen_addresses": [ 00:23:53.829 { 00:23:53.829 "trtype": "TCP", 00:23:53.829 "adrfam": "IPv4", 00:23:53.829 "traddr": "10.0.0.2", 00:23:53.829 "trsvcid": "4420" 00:23:53.829 } 00:23:53.829 ], 00:23:53.829 "allow_any_host": true, 00:23:53.829 "hosts": [] 00:23:53.829 }, 00:23:53.829 { 00:23:53.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.829 "subtype": "NVMe", 00:23:53.829 "listen_addresses": [ 00:23:53.829 { 00:23:53.829 "trtype": "TCP", 00:23:53.829 "adrfam": "IPv4", 00:23:53.829 "traddr": "10.0.0.2", 00:23:53.829 "trsvcid": "4420" 00:23:53.829 } 00:23:53.829 ], 00:23:53.829 "allow_any_host": true, 00:23:53.829 "hosts": [], 00:23:53.829 "serial_number": "SPDK00000000000001", 00:23:53.829 "model_number": "SPDK bdev Controller", 00:23:53.829 "max_namespaces": 32, 00:23:53.829 "min_cntlid": 1, 00:23:53.829 "max_cntlid": 65519, 00:23:53.829 "namespaces": [ 00:23:53.829 { 00:23:53.829 "nsid": 1, 00:23:53.829 "bdev_name": "Malloc0", 00:23:53.829 "name": "Malloc0", 00:23:53.829 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:53.829 "eui64": "ABCDEF0123456789", 00:23:53.829 "uuid": "815371a9-6ee8-49c8-88ef-8dc5c3fde0fa" 00:23:53.829 } 00:23:53.829 ] 00:23:53.829 } 00:23:53.829 ] 00:23:53.829 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.829 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:53.829 [2024-07-25 10:38:57.321041] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:23:53.829 [2024-07-25 10:38:57.321088] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973708 ] 00:23:53.829 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.829 [2024-07-25 10:38:57.353097] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:53.829 [2024-07-25 10:38:57.353145] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:53.829 [2024-07-25 10:38:57.353151] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:53.829 [2024-07-25 10:38:57.353164] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:53.829 [2024-07-25 10:38:57.353173] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:53.829 [2024-07-25 10:38:57.353450] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:53.829 [2024-07-25 10:38:57.353477] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cf0f00 0 00:23:53.829 [2024-07-25 10:38:57.367721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:53.829 [2024-07-25 10:38:57.367736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:53.829 [2024-07-25 10:38:57.367741] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:53.829 [2024-07-25 10:38:57.367746] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:53.829 [2024-07-25 10:38:57.367785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.829 [2024-07-25 10:38:57.367791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.829 [2024-07-25 10:38:57.367796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.829 [2024-07-25 10:38:57.367810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:53.829 [2024-07-25 10:38:57.367826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.829 [2024-07-25 10:38:57.375725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.829 [2024-07-25 10:38:57.375741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.829 [2024-07-25 10:38:57.375746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.829 [2024-07-25 10:38:57.375752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.829 [2024-07-25 10:38:57.375764] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:53.829 [2024-07-25 10:38:57.375772] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:53.829 [2024-07-25 10:38:57.375778] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:23:53.829 [2024-07-25 10:38:57.375793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.829 [2024-07-25 10:38:57.375798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.829 [2024-07-25 10:38:57.375803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.829 [2024-07-25 10:38:57.375811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.829 [2024-07-25 10:38:57.375828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.829 [2024-07-25 10:38:57.375934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.829 [2024-07-25 10:38:57.375941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.829 [2024-07-25 10:38:57.375946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.829 [2024-07-25 10:38:57.375951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.829 [2024-07-25 10:38:57.375960] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:53.830 [2024-07-25 10:38:57.375970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:53.830 [2024-07-25 10:38:57.375978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.375983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.375987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.375995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.830 [2024-07-25 10:38:57.376007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.830 [2024-07-25 10:38:57.376092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.830 [2024-07-25 10:38:57.376099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.830 [2024-07-25 10:38:57.376104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.830 [2024-07-25 10:38:57.376115] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:53.830 [2024-07-25 10:38:57.376125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:53.830 [2024-07-25 10:38:57.376132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.376148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.830 [2024-07-25 10:38:57.376160] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.830 [2024-07-25 10:38:57.376258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.830 [2024-07-25 10:38:57.376265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.830 [2024-07-25 10:38:57.376269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.830 [2024-07-25 10:38:57.376280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:53.830 [2024-07-25 10:38:57.376291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.376308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.830 [2024-07-25 10:38:57.376319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.830 [2024-07-25 10:38:57.376405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.830 [2024-07-25 10:38:57.376415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.830 [2024-07-25 10:38:57.376419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.830 [2024-07-25 10:38:57.376430] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:53.830 [2024-07-25 10:38:57.376436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:53.830 [2024-07-25 10:38:57.376445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:53.830 [2024-07-25 10:38:57.376552] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:53.830 [2024-07-25 10:38:57.376558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:53.830 [2024-07-25 10:38:57.376568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.376585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.830 [2024-07-25 10:38:57.376596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.830 [2024-07-25 10:38:57.376681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:23:53.830 [2024-07-25 10:38:57.376688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.830 [2024-07-25 10:38:57.376692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.830 [2024-07-25 10:38:57.376702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:53.830 [2024-07-25 10:38:57.376713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.376736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.830 [2024-07-25 10:38:57.376748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.830 [2024-07-25 10:38:57.376827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.830 [2024-07-25 10:38:57.376834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.830 [2024-07-25 10:38:57.376839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.830 [2024-07-25 10:38:57.376849] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:53.830 [2024-07-25 10:38:57.376855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:53.830 [2024-07-25 10:38:57.376864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:53.830 [2024-07-25 10:38:57.376874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:53.830 [2024-07-25 10:38:57.376886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.376891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.376898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.830 [2024-07-25 10:38:57.376910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.830 [2024-07-25 10:38:57.377020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.830 [2024-07-25 10:38:57.377027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.830 [2024-07-25 10:38:57.377032] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377037] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf0f00): datao=0, datal=4096, cccid=0 00:23:53.830 [2024-07-25 10:38:57.377043] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d5be40) on tqpair(0x1cf0f00): expected_datao=0, payload_size=4096 00:23:53.830 [2024-07-25 10:38:57.377049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377184] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.830 [2024-07-25 10:38:57.377258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.830 [2024-07-25 10:38:57.377262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.830 [2024-07-25 10:38:57.377275] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:53.830 [2024-07-25 10:38:57.377282] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:53.830 [2024-07-25 10:38:57.377287] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:53.830 [2024-07-25 10:38:57.377294] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:53.830 [2024-07-25 10:38:57.377300] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:53.830 [2024-07-25 10:38:57.377306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:53.830 [2024-07-25 10:38:57.377316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:53.830 [2024-07-25 10:38:57.377326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.377344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:53.830 [2024-07-25 10:38:57.377357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.830 [2024-07-25 10:38:57.377445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.830 [2024-07-25 10:38:57.377451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.830 [2024-07-25 10:38:57.377456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.830 [2024-07-25 10:38:57.377469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1cf0f00) 00:23:53.830 [2024-07-25 10:38:57.377487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.830 [2024-07-25 10:38:57.377494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.830 [2024-07-25 10:38:57.377499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.377510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.831 [2024-07-25 10:38:57.377516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.377532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.831 [2024-07-25 10:38:57.377539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.377555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.831 [2024-07-25 10:38:57.377561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:53.831 [2024-07-25 10:38:57.377573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:53.831 [2024-07-25 10:38:57.377580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.377592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.831 [2024-07-25 10:38:57.377606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5be40, cid 0, qid 0 00:23:53.831 [2024-07-25 10:38:57.377612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5bfc0, cid 1, qid 0 00:23:53.831 [2024-07-25 10:38:57.377617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c140, cid 2, qid 0 00:23:53.831 [2024-07-25 10:38:57.377623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.831 [2024-07-25 10:38:57.377628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c440, cid 4, qid 0 00:23:53.831 [2024-07-25 10:38:57.377747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.831 [2024-07-25 10:38:57.377755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.831 [2024-07-25 10:38:57.377759] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c440) on tqpair=0x1cf0f00 00:23:53.831 [2024-07-25 10:38:57.377770] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:53.831 [2024-07-25 10:38:57.377776] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:53.831 [2024-07-25 10:38:57.377788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.377800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.831 [2024-07-25 10:38:57.377815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c440, cid 4, qid 0 00:23:53.831 [2024-07-25 10:38:57.377909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.831 [2024-07-25 10:38:57.377916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.831 [2024-07-25 10:38:57.377921] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.377926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf0f00): datao=0, datal=4096, cccid=4 00:23:53.831 [2024-07-25 10:38:57.377932] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d5c440) on tqpair(0x1cf0f00): expected_datao=0, payload_size=4096 00:23:53.831 [2024-07-25 10:38:57.377937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.378030] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.378035] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.418801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.831 [2024-07-25 10:38:57.418815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.831 [2024-07-25 10:38:57.418819] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.418825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c440) on tqpair=0x1cf0f00 00:23:53.831 [2024-07-25 10:38:57.418839] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:53.831 [2024-07-25 10:38:57.418865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.418871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.418879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.831 [2024-07-25 10:38:57.418888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.418892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.418897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 
10:38:57.418904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.831 [2024-07-25 10:38:57.418921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c440, cid 4, qid 0 00:23:53.831 [2024-07-25 10:38:57.418927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c5c0, cid 5, qid 0 00:23:53.831 [2024-07-25 10:38:57.419127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.831 [2024-07-25 10:38:57.419133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.831 [2024-07-25 10:38:57.419138] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.419143] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf0f00): datao=0, datal=1024, cccid=4 00:23:53.831 [2024-07-25 10:38:57.419149] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d5c440) on tqpair(0x1cf0f00): expected_datao=0, payload_size=1024 00:23:53.831 [2024-07-25 10:38:57.419154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.419162] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.419167] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.419173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.831 [2024-07-25 10:38:57.419179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.831 [2024-07-25 10:38:57.419184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.419188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c5c0) on tqpair=0x1cf0f00 00:23:53.831 [2024-07-25 10:38:57.463725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.831 [2024-07-25 10:38:57.463737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.831 [2024-07-25 10:38:57.463742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.463747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c440) on tqpair=0x1cf0f00 00:23:53.831 [2024-07-25 10:38:57.463763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.463769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.463776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.831 [2024-07-25 10:38:57.463795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c440, cid 4, qid 0 00:23:53.831 [2024-07-25 10:38:57.463965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.831 [2024-07-25 10:38:57.463972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.831 [2024-07-25 10:38:57.463976] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.463981] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf0f00): datao=0, datal=3072, cccid=4 00:23:53.831 [2024-07-25 10:38:57.463987] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d5c440) on tqpair(0x1cf0f00): expected_datao=0, payload_size=3072 00:23:53.831 
[2024-07-25 10:38:57.463992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464072] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464077] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.831 [2024-07-25 10:38:57.464149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.831 [2024-07-25 10:38:57.464154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c440) on tqpair=0x1cf0f00 00:23:53.831 [2024-07-25 10:38:57.464168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf0f00) 00:23:53.831 [2024-07-25 10:38:57.464179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.831 [2024-07-25 10:38:57.464197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c440, cid 4, qid 0 00:23:53.831 [2024-07-25 10:38:57.464288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.831 [2024-07-25 10:38:57.464295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.831 [2024-07-25 10:38:57.464300] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464304] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf0f00): datao=0, datal=8, cccid=4 00:23:53.831 [2024-07-25 10:38:57.464310] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d5c440) on tqpair(0x1cf0f00): expected_datao=0, payload_size=8 00:23:53.831 [2024-07-25 10:38:57.464316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464323] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.464327] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.504826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.831 [2024-07-25 10:38:57.504840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.831 [2024-07-25 10:38:57.504845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.831 [2024-07-25 10:38:57.504850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c440) on tqpair=0x1cf0f00 00:23:53.831 ===================================================== 00:23:53.831 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:53.831 ===================================================== 00:23:53.831 Controller Capabilities/Features 00:23:53.832 ================================ 00:23:53.832 Vendor ID: 0000 00:23:53.832 Subsystem Vendor ID: 0000 00:23:53.832 Serial Number: .................... 00:23:53.832 Model Number: ........................................ 
00:23:53.832 Firmware Version: 24.09 00:23:53.832 Recommended Arb Burst: 0 00:23:53.832 IEEE OUI Identifier: 00 00 00 00:23:53.832 Multi-path I/O 00:23:53.832 May have multiple subsystem ports: No 00:23:53.832 May have multiple controllers: No 00:23:53.832 Associated with SR-IOV VF: No 00:23:53.832 Max Data Transfer Size: 131072 00:23:53.832 Max Number of Namespaces: 0 00:23:53.832 Max Number of I/O Queues: 1024 00:23:53.832 NVMe Specification Version (VS): 1.3 00:23:53.832 NVMe Specification Version (Identify): 1.3 00:23:53.832 Maximum Queue Entries: 128 00:23:53.832 Contiguous Queues Required: Yes 00:23:53.832 Arbitration Mechanisms Supported 00:23:53.832 Weighted Round Robin: Not Supported 00:23:53.832 Vendor Specific: Not Supported 00:23:53.832 Reset Timeout: 15000 ms 00:23:53.832 Doorbell Stride: 4 bytes 00:23:53.832 NVM Subsystem Reset: Not Supported 00:23:53.832 Command Sets Supported 00:23:53.832 NVM Command Set: Supported 00:23:53.832 Boot Partition: Not Supported 00:23:53.832 Memory Page Size Minimum: 4096 bytes 00:23:53.832 Memory Page Size Maximum: 4096 bytes 00:23:53.832 Persistent Memory Region: Not Supported 00:23:53.832 Optional Asynchronous Events Supported 00:23:53.832 Namespace Attribute Notices: Not Supported 00:23:53.832 Firmware Activation Notices: Not Supported 00:23:53.832 ANA Change Notices: Not Supported 00:23:53.832 PLE Aggregate Log Change Notices: Not Supported 00:23:53.832 LBA Status Info Alert Notices: Not Supported 00:23:53.832 EGE Aggregate Log Change Notices: Not Supported 00:23:53.832 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.832 Zone Descriptor Change Notices: Not Supported 00:23:53.832 Discovery Log Change Notices: Supported 00:23:53.832 Controller Attributes 00:23:53.832 128-bit Host Identifier: Not Supported 00:23:53.832 Non-Operational Permissive Mode: Not Supported 00:23:53.832 NVM Sets: Not Supported 00:23:53.832 Read Recovery Levels: Not Supported 00:23:53.832 Endurance Groups: Not Supported 00:23:53.832 Predictable Latency Mode: Not Supported 00:23:53.832 Traffic Based Keep ALive: Not Supported 00:23:53.832 Namespace Granularity: Not Supported 00:23:53.832 SQ Associations: Not Supported 00:23:53.832 UUID List: Not Supported 00:23:53.832 Multi-Domain Subsystem: Not Supported 00:23:53.832 Fixed Capacity Management: Not Supported 00:23:53.832 Variable Capacity Management: Not Supported 00:23:53.832 Delete Endurance Group: Not Supported 00:23:53.832 Delete NVM Set: Not Supported 00:23:53.832 Extended LBA Formats Supported: Not Supported 00:23:53.832 Flexible Data Placement Supported: Not Supported 00:23:53.832 00:23:53.832 Controller Memory Buffer Support 00:23:53.832 ================================ 00:23:53.832 Supported: No 00:23:53.832 00:23:53.832 Persistent Memory Region Support 00:23:53.832 ================================ 00:23:53.832 Supported: No 00:23:53.832 00:23:53.832 Admin Command Set Attributes 00:23:53.832 ============================ 00:23:53.832 Security Send/Receive: Not Supported 00:23:53.832 Format NVM: Not Supported 00:23:53.832 Firmware Activate/Download: Not Supported 00:23:53.832 Namespace Management: Not Supported 00:23:53.832 Device Self-Test: Not Supported 00:23:53.832 Directives: Not Supported 00:23:53.832 NVMe-MI: Not Supported 00:23:53.832 Virtualization Management: Not Supported 00:23:53.832 Doorbell Buffer Config: Not Supported 00:23:53.832 Get LBA Status Capability: Not Supported 00:23:53.832 Command & Feature Lockdown Capability: Not Supported 00:23:53.832 Abort Command Limit: 1 00:23:53.832 Async 
Event Request Limit: 4 00:23:53.832 Number of Firmware Slots: N/A 00:23:53.832 Firmware Slot 1 Read-Only: N/A 00:23:53.832 Firmware Activation Without Reset: N/A 00:23:53.832 Multiple Update Detection Support: N/A 00:23:53.832 Firmware Update Granularity: No Information Provided 00:23:53.832 Per-Namespace SMART Log: No 00:23:53.832 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.832 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:53.832 Command Effects Log Page: Not Supported 00:23:53.832 Get Log Page Extended Data: Supported 00:23:53.832 Telemetry Log Pages: Not Supported 00:23:53.832 Persistent Event Log Pages: Not Supported 00:23:53.832 Supported Log Pages Log Page: May Support 00:23:53.832 Commands Supported & Effects Log Page: Not Supported 00:23:53.832 Feature Identifiers & Effects Log Page:May Support 00:23:53.832 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.832 Data Area 4 for Telemetry Log: Not Supported 00:23:53.832 Error Log Page Entries Supported: 128 00:23:53.832 Keep Alive: Not Supported 00:23:53.832 00:23:53.832 NVM Command Set Attributes 00:23:53.832 ========================== 00:23:53.832 Submission Queue Entry Size 00:23:53.832 Max: 1 00:23:53.832 Min: 1 00:23:53.832 Completion Queue Entry Size 00:23:53.832 Max: 1 00:23:53.832 Min: 1 00:23:53.832 Number of Namespaces: 0 00:23:53.832 Compare Command: Not Supported 00:23:53.832 Write Uncorrectable Command: Not Supported 00:23:53.832 Dataset Management Command: Not Supported 00:23:53.832 Write Zeroes Command: Not Supported 00:23:53.832 Set Features Save Field: Not Supported 00:23:53.832 Reservations: Not Supported 00:23:53.832 Timestamp: Not Supported 00:23:53.832 Copy: Not Supported 00:23:53.832 Volatile Write Cache: Not Present 00:23:53.832 Atomic Write Unit (Normal): 1 00:23:53.832 Atomic Write Unit (PFail): 1 00:23:53.832 Atomic Compare & Write Unit: 1 00:23:53.832 Fused Compare & Write: Supported 00:23:53.832 Scatter-Gather List 00:23:53.832 SGL Command Set: Supported 00:23:53.832 SGL Keyed: Supported 00:23:53.832 SGL Bit Bucket Descriptor: Not Supported 00:23:53.832 SGL Metadata Pointer: Not Supported 00:23:53.832 Oversized SGL: Not Supported 00:23:53.832 SGL Metadata Address: Not Supported 00:23:53.832 SGL Offset: Supported 00:23:53.832 Transport SGL Data Block: Not Supported 00:23:53.832 Replay Protected Memory Block: Not Supported 00:23:53.832 00:23:53.832 Firmware Slot Information 00:23:53.832 ========================= 00:23:53.832 Active slot: 0 00:23:53.832 00:23:53.832 00:23:53.832 Error Log 00:23:53.832 ========= 00:23:53.832 00:23:53.832 Active Namespaces 00:23:53.832 ================= 00:23:53.832 Discovery Log Page 00:23:53.832 ================== 00:23:53.832 Generation Counter: 2 00:23:53.832 Number of Records: 2 00:23:53.832 Record Format: 0 00:23:53.832 00:23:53.832 Discovery Log Entry 0 00:23:53.832 ---------------------- 00:23:53.832 Transport Type: 3 (TCP) 00:23:53.832 Address Family: 1 (IPv4) 00:23:53.832 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:53.832 Entry Flags: 00:23:53.832 Duplicate Returned Information: 1 00:23:53.832 Explicit Persistent Connection Support for Discovery: 1 00:23:53.832 Transport Requirements: 00:23:53.832 Secure Channel: Not Required 00:23:53.832 Port ID: 0 (0x0000) 00:23:53.832 Controller ID: 65535 (0xffff) 00:23:53.832 Admin Max SQ Size: 128 00:23:53.832 Transport Service Identifier: 4420 00:23:53.832 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:53.832 Transport Address: 10.0.0.2 00:23:53.832 
Discovery Log Entry 1 00:23:53.832 ---------------------- 00:23:53.832 Transport Type: 3 (TCP) 00:23:53.832 Address Family: 1 (IPv4) 00:23:53.832 Subsystem Type: 2 (NVM Subsystem) 00:23:53.832 Entry Flags: 00:23:53.832 Duplicate Returned Information: 0 00:23:53.832 Explicit Persistent Connection Support for Discovery: 0 00:23:53.832 Transport Requirements: 00:23:53.832 Secure Channel: Not Required 00:23:53.832 Port ID: 0 (0x0000) 00:23:53.832 Controller ID: 65535 (0xffff) 00:23:53.832 Admin Max SQ Size: 128 00:23:53.832 Transport Service Identifier: 4420 00:23:53.832 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:53.832 Transport Address: 10.0.0.2 [2024-07-25 10:38:57.504934] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:53.832 [2024-07-25 10:38:57.504946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5be40) on tqpair=0x1cf0f00 00:23:53.832 [2024-07-25 10:38:57.504954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.832 [2024-07-25 10:38:57.504961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5bfc0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.504966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.833 [2024-07-25 10:38:57.504973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c140) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.504978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.833 [2024-07-25 10:38:57.504984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.504990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.833 [2024-07-25 10:38:57.505001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.505019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.505035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.505149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.505156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.505161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.505173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 
10:38:57.505189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.505205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.505332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.505339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.505344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.505354] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:53.833 [2024-07-25 10:38:57.505360] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:53.833 [2024-07-25 10:38:57.505371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.505388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.505399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.505480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.505489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.505493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.505509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.505526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.505537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.505619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.505626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.505631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.505645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505655] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.505662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.505673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.505761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.505768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.505773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.505787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.505804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.505815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.505902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.505909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.505913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.505927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.505937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.505944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.505955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.506042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.506048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.506055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.506071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.506088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.506099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.506181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.506188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.506193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.506208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.506224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.506235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.506322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.506329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.506334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.506348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.506364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.506375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 [2024-07-25 10:38:57.506464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.833 [2024-07-25 10:38:57.506470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.833 [2024-07-25 10:38:57.506475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.833 [2024-07-25 10:38:57.506490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.833 [2024-07-25 10:38:57.506499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.833 [2024-07-25 10:38:57.506506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.833 [2024-07-25 10:38:57.506518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.833 
[2024-07-25 10:38:57.506601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.834 [2024-07-25 10:38:57.506608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.834 [2024-07-25 10:38:57.506613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.834 [2024-07-25 10:38:57.506619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.834 [2024-07-25 10:38:57.506629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.834 [2024-07-25 10:38:57.506634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.834 [2024-07-25 10:38:57.506639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.834 [2024-07-25 10:38:57.506646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.834 [2024-07-25 10:38:57.506657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.834 [2024-07-25 10:38:57.510724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.834 [2024-07-25 10:38:57.510736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.834 [2024-07-25 10:38:57.510740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.834 [2024-07-25 10:38:57.510745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.834 [2024-07-25 10:38:57.510758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.834 [2024-07-25 10:38:57.510763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.834 [2024-07-25 10:38:57.510768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf0f00) 00:23:53.834 [2024-07-25 10:38:57.510775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.834 [2024-07-25 10:38:57.510790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d5c2c0, cid 3, qid 0 00:23:53.834 [2024-07-25 10:38:57.510877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.834 [2024-07-25 10:38:57.510884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.834 [2024-07-25 10:38:57.510889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.834 [2024-07-25 10:38:57.510893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d5c2c0) on tqpair=0x1cf0f00 00:23:53.834 [2024-07-25 10:38:57.510902] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:53.834 00:23:53.834 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:54.096 [2024-07-25 10:38:57.555448] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
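Note: the reports above come from the spdk_nvme_identify example binary, which takes its target as a transport ID string (-r) carrying the same fields that appear in the discovery log entries (trtype, adrfam, traddr, trsvcid, subnqn). A minimal sketch of the two invocations behind this part of the log follows; the second command is exactly the one host/identify.sh@45 runs above, while the first form, addressing the discovery NQN explicitly, is an assumption about how the preceding discovery-controller report was produced.

# Sketch only; the binary path and transport ID values are taken from the log above.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

# Identify the discovery controller (assumed variant; yields the
# "Discovery Log Entry" report printed earlier in this log)
$SPDK_BIN/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

# Identify the NVM subsystem itself with all debug trace flags enabled (-L all);
# this is the command whose EAL startup and debug trace follow below
$SPDK_BIN/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all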
00:23:54.096 [2024-07-25 10:38:57.555502] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973711 ] 00:23:54.096 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.096 [2024-07-25 10:38:57.585769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:54.096 [2024-07-25 10:38:57.585812] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:54.096 [2024-07-25 10:38:57.585818] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:54.096 [2024-07-25 10:38:57.585831] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:54.096 [2024-07-25 10:38:57.585840] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:54.096 [2024-07-25 10:38:57.586132] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:54.096 [2024-07-25 10:38:57.586155] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7ecf00 0 00:23:54.096 [2024-07-25 10:38:57.592720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:54.096 [2024-07-25 10:38:57.592735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:54.096 [2024-07-25 10:38:57.592740] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:54.096 [2024-07-25 10:38:57.592745] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:54.096 [2024-07-25 10:38:57.592778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.592784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.592788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.096 [2024-07-25 10:38:57.592800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:54.096 [2024-07-25 10:38:57.592817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.096 [2024-07-25 10:38:57.600724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.096 [2024-07-25 10:38:57.600733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.096 [2024-07-25 10:38:57.600738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.600743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.096 [2024-07-25 10:38:57.600753] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:54.096 [2024-07-25 10:38:57.600759] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:54.096 [2024-07-25 10:38:57.600766] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:54.096 [2024-07-25 10:38:57.600779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.600784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.096 
[2024-07-25 10:38:57.600788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.096 [2024-07-25 10:38:57.600796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.096 [2024-07-25 10:38:57.600811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.096 [2024-07-25 10:38:57.600991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.096 [2024-07-25 10:38:57.600998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.096 [2024-07-25 10:38:57.601002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.601007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.096 [2024-07-25 10:38:57.601014] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:54.096 [2024-07-25 10:38:57.601024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:54.096 [2024-07-25 10:38:57.601031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.601036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.601041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.096 [2024-07-25 10:38:57.601048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.096 [2024-07-25 10:38:57.601060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.096 [2024-07-25 10:38:57.601153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.096 [2024-07-25 10:38:57.601160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.096 [2024-07-25 10:38:57.601165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.096 [2024-07-25 10:38:57.601169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.097 [2024-07-25 10:38:57.601177] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:54.097 [2024-07-25 10:38:57.601187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:54.097 [2024-07-25 10:38:57.601194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.601210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.097 [2024-07-25 10:38:57.601222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.097 [2024-07-25 10:38:57.601314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.097 [2024-07-25 10:38:57.601320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.097 
[2024-07-25 10:38:57.601325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.097 [2024-07-25 10:38:57.601335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:54.097 [2024-07-25 10:38:57.601346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.601363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.097 [2024-07-25 10:38:57.601374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.097 [2024-07-25 10:38:57.601456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.097 [2024-07-25 10:38:57.601462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.097 [2024-07-25 10:38:57.601467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.097 [2024-07-25 10:38:57.601476] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:54.097 [2024-07-25 10:38:57.601482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:54.097 [2024-07-25 10:38:57.601491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:54.097 [2024-07-25 10:38:57.601598] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:54.097 [2024-07-25 10:38:57.601603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:54.097 [2024-07-25 10:38:57.601611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.601628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.097 [2024-07-25 10:38:57.601639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.097 [2024-07-25 10:38:57.601731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.097 [2024-07-25 10:38:57.601739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.097 [2024-07-25 10:38:57.601745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.097 [2024-07-25 
10:38:57.601755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:54.097 [2024-07-25 10:38:57.601766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.601782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.097 [2024-07-25 10:38:57.601794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.097 [2024-07-25 10:38:57.601877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.097 [2024-07-25 10:38:57.601884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.097 [2024-07-25 10:38:57.601888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.097 [2024-07-25 10:38:57.601898] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:54.097 [2024-07-25 10:38:57.601904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:54.097 [2024-07-25 10:38:57.601914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:54.097 [2024-07-25 10:38:57.601923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:54.097 [2024-07-25 10:38:57.601932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.601937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.601943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.097 [2024-07-25 10:38:57.601955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.097 [2024-07-25 10:38:57.602075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.097 [2024-07-25 10:38:57.602082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.097 [2024-07-25 10:38:57.602086] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.602091] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=4096, cccid=0 00:23:54.097 [2024-07-25 10:38:57.602096] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x857e40) on tqpair(0x7ecf00): expected_datao=0, payload_size=4096 00:23:54.097 [2024-07-25 10:38:57.602102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.602199] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.602204] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.097 
[2024-07-25 10:38:57.642799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.097 [2024-07-25 10:38:57.642812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.097 [2024-07-25 10:38:57.642817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.642822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.097 [2024-07-25 10:38:57.642831] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:54.097 [2024-07-25 10:38:57.642837] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:54.097 [2024-07-25 10:38:57.642845] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:54.097 [2024-07-25 10:38:57.642850] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:54.097 [2024-07-25 10:38:57.642856] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:54.097 [2024-07-25 10:38:57.642862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:54.097 [2024-07-25 10:38:57.642872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:54.097 [2024-07-25 10:38:57.642882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.642888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.642892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.642900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.097 [2024-07-25 10:38:57.642914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.097 [2024-07-25 10:38:57.642999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.097 [2024-07-25 10:38:57.643006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.097 [2024-07-25 10:38:57.643011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.097 [2024-07-25 10:38:57.643022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.643038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.097 [2024-07-25 10:38:57.643045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7ecf00) 
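For context, Discovery Log Entry 1 above describes the NVM subsystem whose admin queue is being initialized in this trace: nqn.2016-06.io.spdk:cnode1 listening on TCP 10.0.0.2:4420, with a single namespace ("Namespace 1 was added" a little further down). The target-side provisioning that creates this subsystem happens earlier in the test and is not part of this excerpt; a rough sketch of the usual scripts/rpc.py sequence that yields such a listener is shown below, with the Malloc0 bdev name, its size, and the serial number being illustrative assumptions.

# Illustrative target-side sketch; not taken from this log. Assumes a running
# nvmf_tgt application reachable by scripts/rpc.py and a NIC that owns 10.0.0.2.
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # bdev name and size are assumptions
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420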
00:23:54.097 [2024-07-25 10:38:57.643061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.097 [2024-07-25 10:38:57.643067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.643083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.097 [2024-07-25 10:38:57.643089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.097 [2024-07-25 10:38:57.643099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.097 [2024-07-25 10:38:57.643105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.097 [2024-07-25 10:38:57.643111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:54.097 [2024-07-25 10:38:57.643123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.643130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ecf00) 00:23:54.098 [2024-07-25 10:38:57.643143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.098 [2024-07-25 10:38:57.643156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857e40, cid 0, qid 0 00:23:54.098 [2024-07-25 10:38:57.643162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x857fc0, cid 1, qid 0 00:23:54.098 [2024-07-25 10:38:57.643167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858140, cid 2, qid 0 00:23:54.098 [2024-07-25 10:38:57.643173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.098 [2024-07-25 10:38:57.643178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858440, cid 4, qid 0 00:23:54.098 [2024-07-25 10:38:57.643288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.098 [2024-07-25 10:38:57.643295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.098 [2024-07-25 10:38:57.643299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858440) on tqpair=0x7ecf00 00:23:54.098 [2024-07-25 10:38:57.643310] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:54.098 [2024-07-25 10:38:57.643316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.643328] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.643335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.643342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ecf00) 00:23:54.098 [2024-07-25 10:38:57.643358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.098 [2024-07-25 10:38:57.643370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858440, cid 4, qid 0 00:23:54.098 [2024-07-25 10:38:57.643457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.098 [2024-07-25 10:38:57.643464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.098 [2024-07-25 10:38:57.643468] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858440) on tqpair=0x7ecf00 00:23:54.098 [2024-07-25 10:38:57.643525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.643536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.643544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ecf00) 00:23:54.098 [2024-07-25 10:38:57.643555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.098 [2024-07-25 10:38:57.643567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858440, cid 4, qid 0 00:23:54.098 [2024-07-25 10:38:57.643660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.098 [2024-07-25 10:38:57.643667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.098 [2024-07-25 10:38:57.643671] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643678] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=4096, cccid=4 00:23:54.098 [2024-07-25 10:38:57.643684] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x858440) on tqpair(0x7ecf00): expected_datao=0, payload_size=4096 00:23:54.098 [2024-07-25 10:38:57.643690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643697] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.643701] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.647723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.098 [2024-07-25 10:38:57.647731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:54.098 [2024-07-25 10:38:57.647736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.647741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858440) on tqpair=0x7ecf00 00:23:54.098 [2024-07-25 10:38:57.647751] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:54.098 [2024-07-25 10:38:57.647767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.647778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.647786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.647791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ecf00) 00:23:54.098 [2024-07-25 10:38:57.647798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.098 [2024-07-25 10:38:57.647813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858440, cid 4, qid 0 00:23:54.098 [2024-07-25 10:38:57.647924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.098 [2024-07-25 10:38:57.647931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.098 [2024-07-25 10:38:57.647935] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.647940] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=4096, cccid=4 00:23:54.098 [2024-07-25 10:38:57.647946] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x858440) on tqpair(0x7ecf00): expected_datao=0, payload_size=4096 00:23:54.098 [2024-07-25 10:38:57.647951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.647958] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.647963] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.098 [2024-07-25 10:38:57.648069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.098 [2024-07-25 10:38:57.648073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858440) on tqpair=0x7ecf00 00:23:54.098 [2024-07-25 10:38:57.648091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ecf00) 00:23:54.098 [2024-07-25 10:38:57.648121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.098 [2024-07-25 10:38:57.648134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858440, cid 4, qid 0 00:23:54.098 [2024-07-25 10:38:57.648229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.098 [2024-07-25 10:38:57.648236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.098 [2024-07-25 10:38:57.648240] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648245] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=4096, cccid=4 00:23:54.098 [2024-07-25 10:38:57.648251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x858440) on tqpair(0x7ecf00): expected_datao=0, payload_size=4096 00:23:54.098 [2024-07-25 10:38:57.648256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648263] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648267] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.098 [2024-07-25 10:38:57.648367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.098 [2024-07-25 10:38:57.648371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858440) on tqpair=0x7ecf00 00:23:54.098 [2024-07-25 10:38:57.648384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648429] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:54.098 [2024-07-25 10:38:57.648435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:54.098 [2024-07-25 10:38:57.648442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:54.098 [2024-07-25 10:38:57.648457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ecf00) 00:23:54.098 [2024-07-25 10:38:57.648469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.098 [2024-07-25 10:38:57.648476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.098 [2024-07-25 10:38:57.648485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ecf00) 00:23:54.098 [2024-07-25 10:38:57.648492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.099 [2024-07-25 10:38:57.648506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858440, cid 4, qid 0 00:23:54.099 [2024-07-25 10:38:57.648512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8585c0, cid 5, qid 0 00:23:54.099 [2024-07-25 10:38:57.648613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.648620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.648627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.648631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858440) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.648638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.648645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.648649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.648654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8585c0) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.648664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.648669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ecf00) 00:23:54.099 [2024-07-25 10:38:57.648676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.099 [2024-07-25 10:38:57.648687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8585c0, cid 5, qid 0 00:23:54.099 [2024-07-25 10:38:57.648852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.648859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.648863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.648869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8585c0) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.648879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.648884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ecf00) 00:23:54.099 [2024-07-25 10:38:57.648891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.099 [2024-07-25 10:38:57.648903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8585c0, cid 5, qid 0 00:23:54.099 [2024-07-25 10:38:57.649057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.649064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.649068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8585c0) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.649083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ecf00) 00:23:54.099 [2024-07-25 10:38:57.649095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.099 [2024-07-25 10:38:57.649106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8585c0, cid 5, qid 0 00:23:54.099 [2024-07-25 10:38:57.649197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.649204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.649208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8585c0) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.649228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ecf00) 00:23:54.099 [2024-07-25 10:38:57.649240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.099 [2024-07-25 10:38:57.649248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ecf00) 00:23:54.099 [2024-07-25 10:38:57.649259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.099 [2024-07-25 10:38:57.649268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7ecf00) 00:23:54.099 [2024-07-25 10:38:57.649280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.099 [2024-07-25 10:38:57.649287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7ecf00) 00:23:54.099 [2024-07-25 10:38:57.649299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.099 [2024-07-25 10:38:57.649311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8585c0, cid 5, qid 0 00:23:54.099 [2024-07-25 10:38:57.649317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858440, cid 4, qid 0 00:23:54.099 [2024-07-25 10:38:57.649322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x858740, cid 6, qid 0 00:23:54.099 [2024-07-25 
10:38:57.649328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8588c0, cid 7, qid 0 00:23:54.099 [2024-07-25 10:38:57.649550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.099 [2024-07-25 10:38:57.649557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.099 [2024-07-25 10:38:57.649562] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649566] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=8192, cccid=5 00:23:54.099 [2024-07-25 10:38:57.649572] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8585c0) on tqpair(0x7ecf00): expected_datao=0, payload_size=8192 00:23:54.099 [2024-07-25 10:38:57.649578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649585] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649590] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.099 [2024-07-25 10:38:57.649602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.099 [2024-07-25 10:38:57.649606] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649611] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=512, cccid=4 00:23:54.099 [2024-07-25 10:38:57.649617] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x858440) on tqpair(0x7ecf00): expected_datao=0, payload_size=512 00:23:54.099 [2024-07-25 10:38:57.649622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649629] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649633] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.099 [2024-07-25 10:38:57.649645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.099 [2024-07-25 10:38:57.649650] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649654] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=512, cccid=6 00:23:54.099 [2024-07-25 10:38:57.649660] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x858740) on tqpair(0x7ecf00): expected_datao=0, payload_size=512 00:23:54.099 [2024-07-25 10:38:57.649665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649672] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649676] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.099 [2024-07-25 10:38:57.649690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.099 [2024-07-25 10:38:57.649695] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649699] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ecf00): datao=0, datal=4096, cccid=7 00:23:54.099 [2024-07-25 10:38:57.649705] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8588c0) on tqpair(0x7ecf00): expected_datao=0, payload_size=4096 00:23:54.099 [2024-07-25 10:38:57.649711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649723] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649728] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.649743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.649747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8585c0) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.649764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.649770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.649775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858440) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.649790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.649796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.649801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858740) on tqpair=0x7ecf00 00:23:54.099 [2024-07-25 10:38:57.649813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.099 [2024-07-25 10:38:57.649819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.099 [2024-07-25 10:38:57.649823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.099 [2024-07-25 10:38:57.649828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8588c0) on tqpair=0x7ecf00 00:23:54.099 ===================================================== 00:23:54.099 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.099 ===================================================== 00:23:54.099 Controller Capabilities/Features 00:23:54.099 ================================ 00:23:54.099 Vendor ID: 8086 00:23:54.099 Subsystem Vendor ID: 8086 00:23:54.099 Serial Number: SPDK00000000000001 00:23:54.099 Model Number: SPDK bdev Controller 00:23:54.099 Firmware Version: 24.09 00:23:54.099 Recommended Arb Burst: 6 00:23:54.099 IEEE OUI Identifier: e4 d2 5c 00:23:54.099 Multi-path I/O 00:23:54.099 May have multiple subsystem ports: Yes 00:23:54.100 May have multiple controllers: Yes 00:23:54.100 Associated with SR-IOV VF: No 00:23:54.100 Max Data Transfer Size: 131072 00:23:54.100 Max Number of Namespaces: 32 00:23:54.100 Max Number of I/O Queues: 127 00:23:54.100 NVMe Specification Version (VS): 1.3 00:23:54.100 NVMe Specification Version (Identify): 1.3 00:23:54.100 Maximum Queue Entries: 128 00:23:54.100 Contiguous Queues Required: Yes 00:23:54.100 Arbitration Mechanisms Supported 00:23:54.100 Weighted Round Robin: Not Supported 00:23:54.100 Vendor Specific: Not Supported 00:23:54.100 Reset Timeout: 15000 ms 00:23:54.100 
Doorbell Stride: 4 bytes 00:23:54.100 NVM Subsystem Reset: Not Supported 00:23:54.100 Command Sets Supported 00:23:54.100 NVM Command Set: Supported 00:23:54.100 Boot Partition: Not Supported 00:23:54.100 Memory Page Size Minimum: 4096 bytes 00:23:54.100 Memory Page Size Maximum: 4096 bytes 00:23:54.100 Persistent Memory Region: Not Supported 00:23:54.100 Optional Asynchronous Events Supported 00:23:54.100 Namespace Attribute Notices: Supported 00:23:54.100 Firmware Activation Notices: Not Supported 00:23:54.100 ANA Change Notices: Not Supported 00:23:54.100 PLE Aggregate Log Change Notices: Not Supported 00:23:54.100 LBA Status Info Alert Notices: Not Supported 00:23:54.100 EGE Aggregate Log Change Notices: Not Supported 00:23:54.100 Normal NVM Subsystem Shutdown event: Not Supported 00:23:54.100 Zone Descriptor Change Notices: Not Supported 00:23:54.100 Discovery Log Change Notices: Not Supported 00:23:54.100 Controller Attributes 00:23:54.100 128-bit Host Identifier: Supported 00:23:54.100 Non-Operational Permissive Mode: Not Supported 00:23:54.100 NVM Sets: Not Supported 00:23:54.100 Read Recovery Levels: Not Supported 00:23:54.100 Endurance Groups: Not Supported 00:23:54.100 Predictable Latency Mode: Not Supported 00:23:54.100 Traffic Based Keep ALive: Not Supported 00:23:54.100 Namespace Granularity: Not Supported 00:23:54.100 SQ Associations: Not Supported 00:23:54.100 UUID List: Not Supported 00:23:54.100 Multi-Domain Subsystem: Not Supported 00:23:54.100 Fixed Capacity Management: Not Supported 00:23:54.100 Variable Capacity Management: Not Supported 00:23:54.100 Delete Endurance Group: Not Supported 00:23:54.100 Delete NVM Set: Not Supported 00:23:54.100 Extended LBA Formats Supported: Not Supported 00:23:54.100 Flexible Data Placement Supported: Not Supported 00:23:54.100 00:23:54.100 Controller Memory Buffer Support 00:23:54.100 ================================ 00:23:54.100 Supported: No 00:23:54.100 00:23:54.100 Persistent Memory Region Support 00:23:54.100 ================================ 00:23:54.100 Supported: No 00:23:54.100 00:23:54.100 Admin Command Set Attributes 00:23:54.100 ============================ 00:23:54.100 Security Send/Receive: Not Supported 00:23:54.100 Format NVM: Not Supported 00:23:54.100 Firmware Activate/Download: Not Supported 00:23:54.100 Namespace Management: Not Supported 00:23:54.100 Device Self-Test: Not Supported 00:23:54.100 Directives: Not Supported 00:23:54.100 NVMe-MI: Not Supported 00:23:54.100 Virtualization Management: Not Supported 00:23:54.100 Doorbell Buffer Config: Not Supported 00:23:54.100 Get LBA Status Capability: Not Supported 00:23:54.100 Command & Feature Lockdown Capability: Not Supported 00:23:54.100 Abort Command Limit: 4 00:23:54.100 Async Event Request Limit: 4 00:23:54.100 Number of Firmware Slots: N/A 00:23:54.100 Firmware Slot 1 Read-Only: N/A 00:23:54.100 Firmware Activation Without Reset: N/A 00:23:54.100 Multiple Update Detection Support: N/A 00:23:54.100 Firmware Update Granularity: No Information Provided 00:23:54.100 Per-Namespace SMART Log: No 00:23:54.100 Asymmetric Namespace Access Log Page: Not Supported 00:23:54.100 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:54.100 Command Effects Log Page: Supported 00:23:54.100 Get Log Page Extended Data: Supported 00:23:54.100 Telemetry Log Pages: Not Supported 00:23:54.100 Persistent Event Log Pages: Not Supported 00:23:54.100 Supported Log Pages Log Page: May Support 00:23:54.100 Commands Supported & Effects Log Page: Not Supported 00:23:54.100 Feature Identifiers & 
Effects Log Page:May Support 00:23:54.100 NVMe-MI Commands & Effects Log Page: May Support 00:23:54.100 Data Area 4 for Telemetry Log: Not Supported 00:23:54.100 Error Log Page Entries Supported: 128 00:23:54.100 Keep Alive: Supported 00:23:54.100 Keep Alive Granularity: 10000 ms 00:23:54.100 00:23:54.100 NVM Command Set Attributes 00:23:54.100 ========================== 00:23:54.100 Submission Queue Entry Size 00:23:54.100 Max: 64 00:23:54.100 Min: 64 00:23:54.100 Completion Queue Entry Size 00:23:54.100 Max: 16 00:23:54.100 Min: 16 00:23:54.100 Number of Namespaces: 32 00:23:54.100 Compare Command: Supported 00:23:54.100 Write Uncorrectable Command: Not Supported 00:23:54.100 Dataset Management Command: Supported 00:23:54.100 Write Zeroes Command: Supported 00:23:54.100 Set Features Save Field: Not Supported 00:23:54.100 Reservations: Supported 00:23:54.100 Timestamp: Not Supported 00:23:54.100 Copy: Supported 00:23:54.100 Volatile Write Cache: Present 00:23:54.100 Atomic Write Unit (Normal): 1 00:23:54.100 Atomic Write Unit (PFail): 1 00:23:54.100 Atomic Compare & Write Unit: 1 00:23:54.100 Fused Compare & Write: Supported 00:23:54.100 Scatter-Gather List 00:23:54.100 SGL Command Set: Supported 00:23:54.100 SGL Keyed: Supported 00:23:54.100 SGL Bit Bucket Descriptor: Not Supported 00:23:54.100 SGL Metadata Pointer: Not Supported 00:23:54.100 Oversized SGL: Not Supported 00:23:54.100 SGL Metadata Address: Not Supported 00:23:54.100 SGL Offset: Supported 00:23:54.100 Transport SGL Data Block: Not Supported 00:23:54.100 Replay Protected Memory Block: Not Supported 00:23:54.100 00:23:54.100 Firmware Slot Information 00:23:54.100 ========================= 00:23:54.100 Active slot: 1 00:23:54.100 Slot 1 Firmware Revision: 24.09 00:23:54.100 00:23:54.100 00:23:54.100 Commands Supported and Effects 00:23:54.100 ============================== 00:23:54.100 Admin Commands 00:23:54.100 -------------- 00:23:54.100 Get Log Page (02h): Supported 00:23:54.100 Identify (06h): Supported 00:23:54.100 Abort (08h): Supported 00:23:54.100 Set Features (09h): Supported 00:23:54.100 Get Features (0Ah): Supported 00:23:54.100 Asynchronous Event Request (0Ch): Supported 00:23:54.100 Keep Alive (18h): Supported 00:23:54.100 I/O Commands 00:23:54.100 ------------ 00:23:54.100 Flush (00h): Supported LBA-Change 00:23:54.100 Write (01h): Supported LBA-Change 00:23:54.100 Read (02h): Supported 00:23:54.100 Compare (05h): Supported 00:23:54.100 Write Zeroes (08h): Supported LBA-Change 00:23:54.100 Dataset Management (09h): Supported LBA-Change 00:23:54.100 Copy (19h): Supported LBA-Change 00:23:54.100 00:23:54.100 Error Log 00:23:54.100 ========= 00:23:54.100 00:23:54.100 Arbitration 00:23:54.100 =========== 00:23:54.100 Arbitration Burst: 1 00:23:54.100 00:23:54.100 Power Management 00:23:54.100 ================ 00:23:54.100 Number of Power States: 1 00:23:54.100 Current Power State: Power State #0 00:23:54.100 Power State #0: 00:23:54.100 Max Power: 0.00 W 00:23:54.100 Non-Operational State: Operational 00:23:54.100 Entry Latency: Not Reported 00:23:54.100 Exit Latency: Not Reported 00:23:54.100 Relative Read Throughput: 0 00:23:54.100 Relative Read Latency: 0 00:23:54.100 Relative Write Throughput: 0 00:23:54.100 Relative Write Latency: 0 00:23:54.100 Idle Power: Not Reported 00:23:54.100 Active Power: Not Reported 00:23:54.100 Non-Operational Permissive Mode: Not Supported 00:23:54.100 00:23:54.100 Health Information 00:23:54.100 ================== 00:23:54.100 Critical Warnings: 00:23:54.100 Available Spare Space: 
OK 00:23:54.100 Temperature: OK 00:23:54.100 Device Reliability: OK 00:23:54.100 Read Only: No 00:23:54.100 Volatile Memory Backup: OK 00:23:54.100 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:54.100 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:54.100 Available Spare: 0% 00:23:54.100 Available Spare Threshold: 0% 00:23:54.100 Life Percentage Used:[2024-07-25 10:38:57.649917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.100 [2024-07-25 10:38:57.649923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7ecf00) 00:23:54.100 [2024-07-25 10:38:57.649930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.100 [2024-07-25 10:38:57.649945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8588c0, cid 7, qid 0 00:23:54.100 [2024-07-25 10:38:57.650111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.100 [2024-07-25 10:38:57.650117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.650122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8588c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650157] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:54.101 [2024-07-25 10:38:57.650168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857e40) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.101 [2024-07-25 10:38:57.650181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x857fc0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.101 [2024-07-25 10:38:57.650192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x858140) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.101 [2024-07-25 10:38:57.650205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.101 [2024-07-25 10:38:57.650219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.650235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.650248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.650338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.650345] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.650349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.650378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.650393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.650483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.650490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.650494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650504] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:54.101 [2024-07-25 10:38:57.650510] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:54.101 [2024-07-25 10:38:57.650521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.650537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.650548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.650709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.650722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.650727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.650758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.650772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.650853] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.650860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.650864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.650878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.650887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.650894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.650905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.650987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.650994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.650998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.651012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.651029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.651040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.651122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.651129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.651133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.651148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.651164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.651175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.651256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.651262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.651267] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.101 [2024-07-25 10:38:57.651281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.101 [2024-07-25 10:38:57.651298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.101 [2024-07-25 10:38:57.651311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.101 [2024-07-25 10:38:57.651398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.101 [2024-07-25 10:38:57.651404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.101 [2024-07-25 10:38:57.651409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.101 [2024-07-25 10:38:57.651414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.102 [2024-07-25 10:38:57.651424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.102 [2024-07-25 10:38:57.651440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.102 [2024-07-25 10:38:57.651451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.102 [2024-07-25 10:38:57.651535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.102 [2024-07-25 10:38:57.651542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.102 [2024-07-25 10:38:57.651546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.102 [2024-07-25 10:38:57.651561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.102 [2024-07-25 10:38:57.651577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.102 [2024-07-25 10:38:57.651589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.102 [2024-07-25 10:38:57.651670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.102 [2024-07-25 10:38:57.651677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.102 [2024-07-25 10:38:57.651681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.102 
[2024-07-25 10:38:57.651696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.651705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ecf00) 00:23:54.102 [2024-07-25 10:38:57.651712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.102 [2024-07-25 10:38:57.655733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8582c0, cid 3, qid 0 00:23:54.102 [2024-07-25 10:38:57.655834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.102 [2024-07-25 10:38:57.655841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.102 [2024-07-25 10:38:57.655846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.102 [2024-07-25 10:38:57.655851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8582c0) on tqpair=0x7ecf00 00:23:54.102 [2024-07-25 10:38:57.655860] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:54.102 0% 00:23:54.102 Data Units Read: 0 00:23:54.102 Data Units Written: 0 00:23:54.102 Host Read Commands: 0 00:23:54.102 Host Write Commands: 0 00:23:54.102 Controller Busy Time: 0 minutes 00:23:54.102 Power Cycles: 0 00:23:54.102 Power On Hours: 0 hours 00:23:54.102 Unsafe Shutdowns: 0 00:23:54.102 Unrecoverable Media Errors: 0 00:23:54.102 Lifetime Error Log Entries: 0 00:23:54.102 Warning Temperature Time: 0 minutes 00:23:54.102 Critical Temperature Time: 0 minutes 00:23:54.102 00:23:54.102 Number of Queues 00:23:54.102 ================ 00:23:54.102 Number of I/O Submission Queues: 127 00:23:54.102 Number of I/O Completion Queues: 127 00:23:54.102 00:23:54.102 Active Namespaces 00:23:54.102 ================= 00:23:54.102 Namespace ID:1 00:23:54.102 Error Recovery Timeout: Unlimited 00:23:54.102 Command Set Identifier: NVM (00h) 00:23:54.102 Deallocate: Supported 00:23:54.102 Deallocated/Unwritten Error: Not Supported 00:23:54.102 Deallocated Read Value: Unknown 00:23:54.102 Deallocate in Write Zeroes: Not Supported 00:23:54.102 Deallocated Guard Field: 0xFFFF 00:23:54.102 Flush: Supported 00:23:54.102 Reservation: Supported 00:23:54.102 Namespace Sharing Capabilities: Multiple Controllers 00:23:54.102 Size (in LBAs): 131072 (0GiB) 00:23:54.102 Capacity (in LBAs): 131072 (0GiB) 00:23:54.102 Utilization (in LBAs): 131072 (0GiB) 00:23:54.102 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:54.102 EUI64: ABCDEF0123456789 00:23:54.102 UUID: 815371a9-6ee8-49c8-88ef-8dc5c3fde0fa 00:23:54.102 Thin Provisioning: Not Supported 00:23:54.102 Per-NS Atomic Units: Yes 00:23:54.102 Atomic Boundary Size (Normal): 0 00:23:54.102 Atomic Boundary Size (PFail): 0 00:23:54.102 Atomic Boundary Offset: 0 00:23:54.102 Maximum Single Source Range Length: 65535 00:23:54.102 Maximum Copy Length: 65535 00:23:54.102 Maximum Source Range Count: 1 00:23:54.102 NGUID/EUI64 Never Reused: No 00:23:54.102 Namespace Write Protected: No 00:23:54.102 Number of LBA Formats: 1 00:23:54.102 Current LBA Format: LBA Format #00 00:23:54.102 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:54.102 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.102 rmmod nvme_tcp 00:23:54.102 rmmod nvme_fabrics 00:23:54.102 rmmod nvme_keyring 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3973429 ']' 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3973429 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3973429 ']' 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3973429 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.102 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3973429 00:23:54.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:54.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:54.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3973429' 00:23:54.362 killing process with pid 3973429 00:23:54.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3973429 00:23:54.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3973429 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.362 10:38:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.900 00:23:56.900 real 0m10.470s 00:23:56.900 user 0m7.870s 00:23:56.900 sys 0m5.478s 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:56.900 ************************************ 00:23:56.900 END TEST nvmf_identify 00:23:56.900 ************************************ 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.900 ************************************ 00:23:56.900 START TEST nvmf_perf 00:23:56.900 ************************************ 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:56.900 * Looking for test storage... 00:23:56.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.900 10:39:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 
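# The xtrace around this point walks nvmf/common.sh's gather_supported_nvmf_pci_devs, which sorts
# NICs into e810/x722/mlx buckets purely by PCI vendor:device ID before the script picks the test
# interfaces (cvl_0_0 / cvl_0_1). A minimal stand-alone sketch of that classification idea follows,
# assuming only that lspci is available; the array names mirror the trace, but the snippet is
# illustrative and is not the script's actual code:
e810=(); x722=(); mlx=()
while read -r slot class vendor device _; do
  case "${vendor}:${device}" in
    8086:1592|8086:159b) e810+=("$slot") ;;   # Intel E810 parts, as matched in the trace above
    8086:37d2)           x722+=("$slot") ;;   # Intel X722
    15b3:*)              mlx+=("$slot")  ;;   # Mellanox ConnectX family
  esac
done < <(lspci -Dnmm | tr -d '"')
printf 'Found %d e810 device(s): %s\n' "${#e810[@]}" "${e810[*]}"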
00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:03.472 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:03.472 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.472 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:03.472 Found net devices under 0000:af:00.0: cvl_0_0 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:03.473 Found net devices under 0000:af:00.1: cvl_0_1 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:24:03.473 00:24:03.473 --- 10.0.0.2 ping statistics --- 00:24:03.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.473 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:03.473 00:24:03.473 --- 10.0.0.1 ping statistics --- 00:24:03.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.473 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3977915 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3977915 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3977915 ']' 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 10:39:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.473 [2024-07-25 10:39:06.908391] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:24:03.473 [2024-07-25 10:39:06.908441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.473 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.473 [2024-07-25 10:39:06.982720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.473 [2024-07-25 10:39:07.056335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
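The nvmf_tcp_init sequence traced above splits the two E810 ports into a point-to-point setup: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP/4420 is opened in iptables, and both directions are ping-checked before the target app is launched inside the namespace. A condensed sketch of the same wiring (interface names are specific to this run; they will differ on other nodes):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1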
00:24:03.473 [2024-07-25 10:39:07.056374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.473 [2024-07-25 10:39:07.056383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.473 [2024-07-25 10:39:07.056392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.473 [2024-07-25 10:39:07.056399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.473 [2024-07-25 10:39:07.056445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.473 [2024-07-25 10:39:07.056541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.473 [2024-07-25 10:39:07.056625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.473 [2024-07-25 10:39:07.056627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.044 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.044 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:04.044 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:04.044 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.044 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.303 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.303 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:04.303 10:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:07.591 10:39:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:07.591 10:39:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:07.591 10:39:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:07.591 10:39:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.591 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:07.591 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:24:07.591 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:07.591 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:07.591 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:07.850 [2024-07-25 10:39:11.353527] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.850 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.850 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:07.850 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.109 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:08.109 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:08.368 10:39:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.368 [2024-07-25 10:39:12.065538] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.627 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:08.627 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:24:08.627 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:08.627 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:08.627 10:39:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:10.003 Initializing NVMe Controllers 00:24:10.003 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:24:10.003 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:24:10.003 Initialization complete. Launching workers. 00:24:10.003 ======================================================== 00:24:10.003 Latency(us) 00:24:10.003 Device Information : IOPS MiB/s Average min max 00:24:10.003 PCIE (0000:d8:00.0) NSID 1 from core 0: 101924.27 398.14 313.46 39.23 4260.02 00:24:10.003 ======================================================== 00:24:10.003 Total : 101924.27 398.14 313.46 39.23 4260.02 00:24:10.003 00:24:10.003 10:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:10.003 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.378 Initializing NVMe Controllers 00:24:11.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.378 Initialization complete. Launching workers. 
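Collapsing the RPC calls traced above, the target subsystem the perf runs connect to is assembled in six calls (commands copied from the trace; $rpc is just shorthand for the workspace's scripts/rpc.py):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420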
00:24:11.378 ======================================================== 00:24:11.378 Latency(us) 00:24:11.378 Device Information : IOPS MiB/s Average min max 00:24:11.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.00 0.35 11161.42 243.68 45583.43 00:24:11.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17938.13 6984.92 47900.17 00:24:11.378 ======================================================== 00:24:11.379 Total : 146.00 0.57 13760.71 243.68 47900.17 00:24:11.379 00:24:11.379 10:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.379 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.314 Initializing NVMe Controllers 00:24:12.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:12.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:12.314 Initialization complete. Launching workers. 00:24:12.314 ======================================================== 00:24:12.314 Latency(us) 00:24:12.314 Device Information : IOPS MiB/s Average min max 00:24:12.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10101.92 39.46 3169.76 592.99 9146.87 00:24:12.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.59 15.10 8316.29 4104.53 18383.69 00:24:12.314 ======================================================== 00:24:12.314 Total : 13968.51 54.56 4594.36 592.99 18383.69 00:24:12.314 00:24:12.314 10:39:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:12.314 10:39:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:12.314 10:39:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:12.572 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.121 Initializing NVMe Controllers 00:24:15.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.121 Controller IO queue size 128, less than required. 00:24:15.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.121 Controller IO queue size 128, less than required. 00:24:15.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:15.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:15.121 Initialization complete. Launching workers. 
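Each of these host-side measurements is a single spdk_nvme_perf invocation against the TCP listener; for example, the queue-depth-32 result above came from the following command (copied from the trace, binary path specific to this workspace):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'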
00:24:15.121 ======================================================== 00:24:15.121 Latency(us) 00:24:15.121 Device Information : IOPS MiB/s Average min max 00:24:15.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 952.41 238.10 137286.84 77060.01 224814.15 00:24:15.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.44 153.61 218210.48 62179.63 342723.88 00:24:15.121 ======================================================== 00:24:15.121 Total : 1566.86 391.71 169021.10 62179.63 342723.88 00:24:15.121 00:24:15.121 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:15.121 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.121 No valid NVMe controllers or AIO or URING devices found 00:24:15.121 Initializing NVMe Controllers 00:24:15.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.121 Controller IO queue size 128, less than required. 00:24:15.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.121 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:15.121 Controller IO queue size 128, less than required. 00:24:15.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.121 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:15.121 WARNING: Some requested NVMe devices were skipped 00:24:15.379 10:39:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:15.379 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.915 Initializing NVMe Controllers 00:24:17.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.915 Controller IO queue size 128, less than required. 00:24:17.915 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.915 Controller IO queue size 128, less than required. 00:24:17.915 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:17.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:17.915 Initialization complete. Launching workers. 
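The empty run above (host/perf.sh@64, IO size 36964) is expected rather than a failure: 36964 is not a multiple of the namespaces' 512-byte sector size (36964 = 72 * 512 + 100), so spdk_nvme_perf removes both namespaces from the test and then reports that no valid controllers remain, which is why that invocation prints warnings instead of a latency summary table.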
00:24:17.915 00:24:17.915 ==================== 00:24:17.915 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:17.915 TCP transport: 00:24:17.915 polls: 42648 00:24:17.915 idle_polls: 14546 00:24:17.915 sock_completions: 28102 00:24:17.915 nvme_completions: 3829 00:24:17.915 submitted_requests: 5702 00:24:17.915 queued_requests: 1 00:24:17.915 00:24:17.915 ==================== 00:24:17.915 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:17.915 TCP transport: 00:24:17.915 polls: 40387 00:24:17.915 idle_polls: 11778 00:24:17.915 sock_completions: 28609 00:24:17.915 nvme_completions: 4387 00:24:17.915 submitted_requests: 6666 00:24:17.915 queued_requests: 1 00:24:17.915 ======================================================== 00:24:17.915 Latency(us) 00:24:17.915 Device Information : IOPS MiB/s Average min max 00:24:17.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 957.00 239.25 138314.68 74930.99 215181.14 00:24:17.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1096.50 274.12 119795.27 54802.83 174539.88 00:24:17.915 ======================================================== 00:24:17.915 Total : 2053.49 513.37 128425.93 54802.83 215181.14 00:24:17.915 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.915 rmmod nvme_tcp 00:24:17.915 rmmod nvme_fabrics 00:24:17.915 rmmod nvme_keyring 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3977915 ']' 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3977915 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3977915 ']' 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3977915 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3977915 00:24:17.915 10:39:21 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3977915' 00:24:17.915 killing process with pid 3977915 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3977915 00:24:17.915 10:39:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3977915 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.465 10:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.370 10:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:22.370 00:24:22.370 real 0m25.499s 00:24:22.370 user 1m6.189s 00:24:22.370 sys 0m8.387s 00:24:22.370 10:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.370 10:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:22.370 ************************************ 00:24:22.370 END TEST nvmf_perf 00:24:22.370 ************************************ 00:24:22.370 10:39:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.370 10:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:22.370 10:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.370 10:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.370 ************************************ 00:24:22.370 START TEST nvmf_fio_host 00:24:22.370 ************************************ 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.371 * Looking for test storage... 
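Teardown in nvmftestfini, traced above, mirrors the setup: the subsystem is deleted over RPC, the kernel initiator modules are unloaded, the target process is killed, and the namespace plus leftover addresses are flushed. A condensed sketch (module names as reported by rmmod in this run; the namespace removal is shown here as a plain ip netns delete, which is what the harness's _remove_spdk_ns helper amounts to):
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring here
modprobe -v -r nvme-fabrics
kill "$nvmfpid"
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1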
00:24:22.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.371 10:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:28.937 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:28.937 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.937 
10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:28.937 Found net devices under 0000:af:00.0: cvl_0_0 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:28.937 Found net devices under 0000:af:00.1: cvl_0_1 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.937 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:24:28.938 00:24:28.938 --- 10.0.0.2 ping statistics --- 00:24:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.938 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:28.938 00:24:28.938 --- 10.0.0.1 ping statistics --- 00:24:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.938 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3984286 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 3984286 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3984286 ']' 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.938 10:39:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.938 [2024-07-25 10:39:32.564494] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:24:28.938 [2024-07-25 10:39:32.564542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.938 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.938 [2024-07-25 10:39:32.638622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.220 [2024-07-25 10:39:32.713177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.220 [2024-07-25 10:39:32.713214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.220 [2024-07-25 10:39:32.713223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.220 [2024-07-25 10:39:32.713232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.220 [2024-07-25 10:39:32.713238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
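The fio stage that follows drives this same cnode1 subsystem through SPDK's fio plugin: fio_nvme LD_PRELOADs build/fio/spdk_nvme and passes the transport ID as the fio filename. A rough equivalent of the invocation behind the "rw=randrw, bs=4096 ... ioengine=spdk, iodepth=128" banner further down (job options inferred from that banner and the ~2 s runtime, not read from example_config.fio itself; thread=1 is the plugin's usual requirement):
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
/usr/src/fio/fio --name=test --ioengine=spdk --thread=1 \
    --rw=randrw --bs=4096 --iodepth=128 --time_based=1 --runtime=2 \
    --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'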
00:24:29.220 [2024-07-25 10:39:32.713333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.220 [2024-07-25 10:39:32.713446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.220 [2024-07-25 10:39:32.713534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.220 [2024-07-25 10:39:32.713536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.789 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.789 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:29.789 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:30.052 [2024-07-25 10:39:33.544260] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.052 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:30.052 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.052 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.052 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:30.311 Malloc1 00:24:30.311 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.311 10:39:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:30.569 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.827 [2024-07-25 10:39:34.333497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.827 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:31.110 10:39:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:31.110 10:39:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.374 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:31.374 fio-3.35 00:24:31.374 Starting 1 thread 00:24:31.374 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.897 00:24:33.897 test: (groupid=0, jobs=1): err= 0: pid=3984846: Thu Jul 25 10:39:37 2024 00:24:33.897 read: IOPS=12.4k, BW=48.3MiB/s (50.7MB/s)(96.9MiB/2005msec) 00:24:33.897 slat (nsec): min=1520, max=244546, avg=1657.89, stdev=2221.55 00:24:33.897 clat (usec): min=3261, max=11614, avg=5748.46, stdev=482.51 00:24:33.897 lat (usec): min=3297, max=11620, avg=5750.12, stdev=482.67 00:24:33.897 clat percentiles (usec): 00:24:33.897 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:24:33.897 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5800], 00:24:33.897 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:24:33.897 | 99.00th=[ 7046], 99.50th=[ 7832], 99.90th=[ 9765], 99.95th=[10814], 00:24:33.897 | 99.99th=[11338] 00:24:33.897 bw ( KiB/s): min=48504, max=50128, per=100.00%, avg=49488.00, stdev=696.90, samples=4 00:24:33.897 iops : min=12126, max=12532, avg=12372.00, stdev=174.23, samples=4 00:24:33.897 write: IOPS=12.4k, BW=48.3MiB/s 
(50.6MB/s)(96.8MiB/2005msec); 0 zone resets 00:24:33.897 slat (nsec): min=1578, max=239805, avg=1758.94, stdev=1696.80 00:24:33.897 clat (usec): min=2505, max=8937, avg=4575.87, stdev=372.50 00:24:33.897 lat (usec): min=2520, max=8938, avg=4577.63, stdev=372.54 00:24:33.897 clat percentiles (usec): 00:24:33.897 | 1.00th=[ 3687], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4293], 00:24:33.897 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4686], 00:24:33.897 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5080], 00:24:33.897 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 7635], 99.95th=[ 8356], 00:24:33.897 | 99.99th=[ 8848] 00:24:33.897 bw ( KiB/s): min=49160, max=49856, per=99.97%, avg=49426.00, stdev=299.50, samples=4 00:24:33.897 iops : min=12290, max=12464, avg=12356.50, stdev=74.88, samples=4 00:24:33.897 lat (msec) : 4=2.48%, 10=97.48%, 20=0.04% 00:24:33.897 cpu : usr=62.38%, sys=30.79%, ctx=51, majf=0, minf=5 00:24:33.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:33.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:33.897 issued rwts: total=24801,24783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:33.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:33.897 00:24:33.897 Run status group 0 (all jobs): 00:24:33.897 READ: bw=48.3MiB/s (50.7MB/s), 48.3MiB/s-48.3MiB/s (50.7MB/s-50.7MB/s), io=96.9MiB (102MB), run=2005-2005msec 00:24:33.897 WRITE: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=96.8MiB (102MB), run=2005-2005msec 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:33.897 10:39:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:33.897 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.898 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:33.898 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:33.898 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:33.898 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:33.898 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:33.898 10:39:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:34.154 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:34.154 fio-3.35 00:24:34.154 Starting 1 thread 00:24:34.154 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.744 00:24:36.745 test: (groupid=0, jobs=1): err= 0: pid=3985376: Thu Jul 25 10:39:40 2024 00:24:36.745 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(333MiB/2005msec) 00:24:36.745 slat (usec): min=2, max=166, avg= 2.79, stdev= 2.11 00:24:36.745 clat (usec): min=1542, max=21454, avg=7277.95, stdev=2091.87 00:24:36.745 lat (usec): min=1545, max=21457, avg=7280.73, stdev=2092.21 00:24:36.745 clat percentiles (usec): 00:24:36.745 | 1.00th=[ 3523], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5538], 00:24:36.745 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[ 7570], 00:24:36.745 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[ 9896], 95.00th=[10814], 00:24:36.745 | 99.00th=[14091], 99.50th=[14746], 99.90th=[15926], 99.95th=[16057], 00:24:36.745 | 99.99th=[16319] 00:24:36.745 bw ( KiB/s): min=79392, max=96319, per=50.01%, avg=85095.75, stdev=7826.49, samples=4 00:24:36.745 iops : min= 4962, max= 6019, avg=5318.25, stdev=488.71, samples=4 00:24:36.745 write: IOPS=6294, BW=98.3MiB/s (103MB/s)(174MiB/1765msec); 0 zone resets 00:24:36.745 slat (usec): min=28, max=390, avg=30.85, stdev= 9.06 00:24:36.745 clat (usec): min=2772, max=22858, avg=8369.91, stdev=1752.62 00:24:36.745 lat (usec): min=2800, max=22887, avg=8400.75, stdev=1756.03 00:24:36.745 clat percentiles (usec): 00:24:36.745 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7111], 00:24:36.745 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8455], 00:24:36.745 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11469], 00:24:36.745 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16909], 99.95th=[16909], 00:24:36.745 | 99.99th=[19268] 00:24:36.745 bw ( KiB/s): min=83040, max=100151, per=87.86%, avg=88477.75, stdev=8000.95, samples=4 00:24:36.745 iops : min= 5190, max= 6259, avg=5529.75, stdev=499.85, samples=4 00:24:36.745 lat (msec) : 2=0.02%, 4=1.84%, 10=87.63%, 20=10.51%, 50=0.01% 00:24:36.745 cpu : usr=82.24%, 
sys=14.57%, ctx=38, majf=0, minf=2 00:24:36.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:36.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:36.745 issued rwts: total=21324,11109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:36.745 00:24:36.745 Run status group 0 (all jobs): 00:24:36.745 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=333MiB (349MB), run=2005-2005msec 00:24:36.745 WRITE: bw=98.3MiB/s (103MB/s), 98.3MiB/s-98.3MiB/s (103MB/s-103MB/s), io=174MiB (182MB), run=1765-1765msec 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:36.745 rmmod nvme_tcp 00:24:36.745 rmmod nvme_fabrics 00:24:36.745 rmmod nvme_keyring 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3984286 ']' 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3984286 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3984286 ']' 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3984286 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3984286 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3984286' 00:24:36.745 killing process with pid 3984286 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 
-- # kill 3984286 00:24:36.745 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3984286 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.004 10:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.538 00:24:39.538 real 0m16.938s 00:24:39.538 user 0m52.549s 00:24:39.538 sys 0m7.580s 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 ************************************ 00:24:39.538 END TEST nvmf_fio_host 00:24:39.538 ************************************ 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.538 ************************************ 00:24:39.538 START TEST nvmf_failover 00:24:39.538 ************************************ 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:39.538 * Looking for test storage... 
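Condensed from the nvmf_fio_host run that ends above: the target provisioning and the fio invocation through the SPDK NVMe fio plugin reduce to the rpc.py and fio calls below. This is a readability sketch extracted from the log, not the host/fio.sh script itself; rpc.py talks to the default /var/tmp/spdk.sock, and /usr/src/fio/fio is the fio binary this CI host uses.

# Illustrative recap of the fio-host provisioning shown in the log above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK_DIR/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, same options as the test
$RPC bdev_malloc_create 64 512 -b Malloc1                        # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Run fio against the target through the SPDK NVMe ioengine (fio plugin via LD_PRELOAD).
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_nvme /usr/src/fio/fio \
    $SPDK_DIR/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096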
00:24:39.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.538 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.539 10:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.107 10:39:49 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:46.107 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:46.107 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.107 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:46.108 Found net devices under 0000:af:00.0: cvl_0_0 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:46.108 Found net devices under 0000:af:00.1: cvl_0_1 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.108 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:46.367 00:24:46.367 --- 10.0.0.2 ping statistics --- 00:24:46.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.367 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:24:46.367 00:24:46.367 --- 10.0.0.1 ping statistics --- 00:24:46.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.367 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3989576 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:46.367 10:39:49 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3989576 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3989576 ']' 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.367 10:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:46.367 [2024-07-25 10:39:49.922380] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:24:46.367 [2024-07-25 10:39:49.922425] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.367 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.367 [2024-07-25 10:39:49.993876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:46.626 [2024-07-25 10:39:50.072328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.626 [2024-07-25 10:39:50.072368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.626 [2024-07-25 10:39:50.072378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.626 [2024-07-25 10:39:50.072387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.626 [2024-07-25 10:39:50.072395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
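The nvmftestinit step above carved the two e810 ports (cvl_0_0 / cvl_0_1) into a point-to-point test topology: the target-side port is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2, the initiator side keeps 10.0.0.1 in the root namespace, and connectivity is verified with ping in both directions. A condensed, illustrative replay of the commands the log shows (run as root; the interface names are specific to this CI host):

# Illustrative replay of the netns setup logged above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP on the initiator-side port

ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator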
00:24:46.626 [2024-07-25 10:39:50.072451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.626 [2024-07-25 10:39:50.072534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.626 [2024-07-25 10:39:50.072536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.194 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.194 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:47.194 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.194 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.194 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.194 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.194 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:47.453 [2024-07-25 10:39:50.932884] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.453 10:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:47.453 Malloc0 00:24:47.711 10:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.711 10:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:47.970 10:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.228 [2024-07-25 10:39:51.686417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.228 10:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:48.228 [2024-07-25 10:39:51.862870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.228 10:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:48.487 [2024-07-25 10:39:52.031417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3989878 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3989878 /var/tmp/bdevperf.sock 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3989878 ']' 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.487 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:49.421 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.421 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:49.422 10:39:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.680 NVMe0n1 00:24:49.680 10:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.937 00:24:49.937 10:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:49.937 10:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3990143 00:24:49.937 10:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:50.871 10:39:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.129 [2024-07-25 10:39:54.609034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.129 [2024-07-25 10:39:54.609109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.129 [2024-07-25 10:39:54.609120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.129 [2024-07-25 10:39:54.609129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.129 [2024-07-25 10:39:54.609138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.129 [2024-07-25 10:39:54.609147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.129 [2024-07-25 10:39:54.609155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb8d1e0 is same with the state(5) to be set
[... the same tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0xb8d1e0 repeats with consecutive timestamps from 10:39:54.609164 through 10:39:54.610122, at which point this excerpt of the log is cut off mid-line ...]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 [2024-07-25 10:39:54.610135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 [2024-07-25 10:39:54.610143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 [2024-07-25 10:39:54.610152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 [2024-07-25 10:39:54.610161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 [2024-07-25 10:39:54.610170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 [2024-07-25 10:39:54.610178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 [2024-07-25 10:39:54.610186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d1e0 is same with the state(5) to be set 00:24:51.131 10:39:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:54.415 10:39:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.415 00:24:54.415 10:39:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:54.415 [2024-07-25 10:39:58.070079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8df60 is same with the state(5) to be set 00:24:54.415 [2024-07-25 10:39:58.070138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8df60 is same with the state(5) to be set 00:24:54.415 [2024-07-25 10:39:58.070149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8df60 is same with the state(5) to be set 00:24:54.415 [2024-07-25 10:39:58.070158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8df60 is same with the state(5) to be set 00:24:54.415 [2024-07-25 10:39:58.070168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8df60 is same with the state(5) to be set 00:24:54.415 [2024-07-25 10:39:58.070176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8df60 is same with the state(5) to be set 00:24:54.415 [2024-07-25 10:39:58.070185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8df60 is same with the state(5) to be set 00:24:54.415 10:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:57.698 10:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.698 [2024-07-25 10:40:01.260432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.698 10:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- 
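Note: the three rpc.py calls traced above are the core of this phase of the failover test: the host side attaches a second controller path to the same subsystem on port 4422, the target then drops its 4421 listener (presumably forcing the host over to the 4422 path), and a listener is brought back up on 4420 for the host to reconnect to. A condensed sketch of that sequence, with paths abbreviated to an SPDK checkout and the inline comments added as interpretation (the address, ports, and NQN are taken directly from the log; the real host/failover.sh wraps these calls in additional checks):
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # add a second path for bdev NVMe0
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421                                          # take away the listener the active path is using
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                          # give the host a listener to reconnect to
The burst of tcp.c:1653 recv-state errors around the listener removal appears to be the target repeatedly trying to move the affected qpair's receive state while that connection is being torn down; the test logs them and continues.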
00:24:58.631 10:40:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:58.955 [2024-07-25 10:40:02.455989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8ed50 is same with the state(5) to be set
[the same recv-state error for tqpair=0xb8ed50 repeats unchanged for every entry stamped 10:40:02.456034 through 10:40:02.457151; the run is collapsed here]
00:24:58.956 10:40:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3990143
00:25:05.524 0
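Note: host/failover.sh line 59 waits on PID 3990143, a command the script started in the background earlier, so the listener changes above all happen while that background job is still outstanding; the bare '0' that follows is the result logged for that wait. As a rough, generic sketch of the background-and-wait shell pattern in play (run_io_workload is a hypothetical placeholder, not the command the script actually backgrounds):
    run_io_workload &      # start the background job
    io_pid=$!              # remember its PID (3990143 in this run)
    # ...reconfigure listeners with rpc.py while the job is still running...
    wait "$io_pid"         # block here until the job exits
    echo $?                # zero means the job finished cleanly despite the failover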
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3989878
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3989878 ']'
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3989878
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3989878
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3989878'
killing process with pid 3989878
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3989878
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3989878
00:25:05.524 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:05.524 [2024-07-25 10:39:52.090041] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
[2024-07-25 10:39:52.090094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3989878 ]
00:25:05.524 EAL: No free 2048 kB hugepages reported on node 1
00:25:05.524 [2024-07-25 10:39:52.159465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:05.524 [2024-07-25 10:39:52.231268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.524 Running I/O for 15 seconds...
00:25:05.524 [2024-07-25 10:39:54.610555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.524 [2024-07-25 10:39:54.610593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs continue on qid:1 for READ commands from lba:101704 through lba:102264 and WRITE commands from lba:102280 through lba:102424, every completion reported as ABORTED - SQ DELETION (00/08); the run is collapsed here up to the final pair below]
00:25:05.527 [2024-07-25 10:39:54.612408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:05.527 [2024-07-25 10:39:54.612417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 
[2024-07-25 10:39:54.612828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.527 [2024-07-25 10:39:54.612968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.527 [2024-07-25 10:39:54.612978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.612988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.528 [2024-07-25 10:39:54.612996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.613023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.528 [2024-07-25 10:39:54.613033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102664 len:8 PRP1 0x0 PRP2 0x0 00:25:05.528 [2024-07-25 10:39:54.613042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.613054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.528 [2024-07-25 10:39:54.613061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.528 [2024-07-25 10:39:54.613069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102672 len:8 PRP1 0x0 PRP2 0x0 00:25:05.528 [2024-07-25 10:39:54.613078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.613088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.528 [2024-07-25 10:39:54.613095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.528 [2024-07-25 10:39:54.613102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102680 len:8 PRP1 0x0 PRP2 0x0 00:25:05.528 [2024-07-25 10:39:54.613112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.613122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.528 [2024-07-25 10:39:54.613129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.528 [2024-07-25 10:39:54.613137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102688 len:8 PRP1 0x0 PRP2 0x0 00:25:05.528 [2024-07-25 10:39:54.613145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.613154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.528 [2024-07-25 10:39:54.613161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.528 [2024-07-25 10:39:54.613170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102696 len:8 PRP1 0x0 PRP2 0x0 00:25:05.528 [2024-07-25 10:39:54.613181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.613190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.528 [2024-07-25 10:39:54.613200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.528 [2024-07-25 10:39:54.613207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102704 len:8 PRP1 0x0 PRP2 0x0 00:25:05.528 [2024-07-25 10:39:54.613216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.528 [2024-07-25 10:39:54.613225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.528 [2024-07-25 10:39:54.613232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.528 [2024-07-25 10:39:54.613240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102712 len:8 PRP1 0x0 PRP2 0x0 00:25:05.528 [2024-07-25 10:39:54.613249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
00:25:05.528 [... a final queued READ request (lba 102272) is completed manually as ABORTED - SQ DELETION (00/08) ...]
00:25:05.528 [2024-07-25 10:39:54.613324] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x943990 was disconnected and freed. reset controller.
00:25:05.528 [2024-07-25 10:39:54.613335] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:05.528 [... four ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) are likewise completed as ABORTED - SQ DELETION (00/08) ...]
00:25:05.528 [2024-07-25 10:39:54.627513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:05.528 [2024-07-25 10:39:54.627548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950590 (9): Bad file descriptor
00:25:05.528 [2024-07-25 10:39:54.631168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:05.528 [2024-07-25 10:39:54.697351] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:05.528 [... after the reset, the same NOTICE pattern repeats (timestamps 2024-07-25 10:39:58.07xxxx): queued WRITE commands (lba 65904-65928) and READ commands (lba 64968-65896) on qid:1 are each printed by nvme_qpair.c and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:05.531 [2024-07-25 10:39:58.072938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.531 [2024-07-25 10:39:58.072946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.531 [2024-07-25 10:39:58.072957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.531 [2024-07-25 10:39:58.072966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.531 [2024-07-25 10:39:58.072977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.531 [2024-07-25 10:39:58.072987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.531 [2024-07-25 10:39:58.072997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.531 [2024-07-25 10:39:58.073007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.531 [2024-07-25 10:39:58.073018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.532 [2024-07-25 10:39:58.073028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:39:58.073038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.532 [2024-07-25 10:39:58.073047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:39:58.073067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.532 [2024-07-25 10:39:58.073075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.532 [2024-07-25 10:39:58.073083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65984 len:8 PRP1 0x0 PRP2 0x0 00:25:05.532 [2024-07-25 10:39:58.073093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:39:58.073136] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9743b0 was disconnected and freed. reset controller. 
00:25:05.532 [2024-07-25 10:39:58.073147] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:25:05.532 [2024-07-25 10:39:58.073169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:05.532 [2024-07-25 10:39:58.073179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:05.532 [2024-07-25 10:39:58.073189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:05.532 [2024-07-25 10:39:58.073198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:05.532 [2024-07-25 10:39:58.073207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:05.532 [2024-07-25 10:39:58.073216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:05.532 [2024-07-25 10:39:58.073226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:05.532 [2024-07-25 10:39:58.073235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:05.532 [2024-07-25 10:39:58.073244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:05.532 [2024-07-25 10:39:58.075950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:25:05.532 [2024-07-25 10:39:58.075983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950590 (9): Bad file descriptor 
00:25:05.532 [2024-07-25 10:39:58.105322] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
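The records above mark the first complete failover cycle in this run: the 10.0.0.2:4421 path is dropped, the queued admin commands are completed with ABORTED - SQ DELETION status, the controller is disconnected and reset, and I/O resumes on 10.0.0.2:4422. When skimming a log of this size it can help to extract only those transition records. The snippet below is a minimal shell sketch for that kind of triage, not part of the test suite; the file name failover.log is a placeholder for wherever this console output was saved.

# Pull out only the failover hops and the reset results, dropping the
# per-command READ/WRITE notices that dominate the log.
grep -E 'bdev_nvme_failover_trid|_bdev_nvme_reset_ctrlr_complete' failover.log \
  | sed 's/.*\*NOTICE\*: //'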
00:25:05.532 [2024-07-25 10:40:02.457320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457557] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.532 [2024-07-25 10:40:02.457930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.532 [2024-07-25 10:40:02.457940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.457949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.457960] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.457969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.457980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.457989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 
10:40:02.458558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.533 [2024-07-25 10:40:02.458607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.533 [2024-07-25 10:40:02.458615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.458983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.458992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:05.534 [2024-07-25 10:40:02.459354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.534 [2024-07-25 10:40:02.459375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.534 [2024-07-25 10:40:02.459384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.535 [2024-07-25 10:40:02.459720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.535 [2024-07-25 10:40:02.459739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.535 [2024-07-25 10:40:02.459759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.535 [2024-07-25 10:40:02.459778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.535 [2024-07-25 10:40:02.459797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.535 [2024-07-25 10:40:02.459816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.535 [2024-07-25 10:40:02.459835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.535 [2024-07-25 10:40:02.459855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9743b0 is same with the state(5) to be set 00:25:05.535 [2024-07-25 10:40:02.459877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.535 [2024-07-25 10:40:02.459884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.535 [2024-07-25 10:40:02.459892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:25:05.535 [2024-07-25 10:40:02.459901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.459946] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9743b0 was disconnected and freed. reset controller. 
00:25:05.535 [2024-07-25 10:40:02.459958] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:05.535 [2024-07-25 10:40:02.459980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.535 [2024-07-25 10:40:02.459991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.460000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.535 [2024-07-25 10:40:02.460009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.460019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.535 [2024-07-25 10:40:02.460027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.460037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.535 [2024-07-25 10:40:02.460046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.535 [2024-07-25 10:40:02.460057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.535 [2024-07-25 10:40:02.460079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950590 (9): Bad file descriptor 00:25:05.535 [2024-07-25 10:40:02.462774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.535 [2024-07-25 10:40:02.587122] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
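The "Resetting controller successful" notices above are what host/failover.sh keys on: after the timed run it counts those messages and requires exactly three, one for each failover across the 4420/4421/4422 listeners, as the xtrace just below shows. A minimal sketch of that check, assuming the bdevperf output was captured to the try.txt file referenced later in this log:

    # Count successful controller resets in the captured bdevperf log and fail
    # the test if the expected number of failovers did not occur.  Reading the
    # count from try.txt is an assumption; the xtrace only shows the grep itself.
    count=$(grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi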
00:25:05.535 00:25:05.535 Latency(us) 00:25:05.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.535 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:05.535 Verification LBA range: start 0x0 length 0x4000 00:25:05.535 NVMe0n1 : 15.01 11898.38 46.48 709.22 0.00 10131.59 822.48 25480.40 00:25:05.535 =================================================================================================================== 00:25:05.535 Total : 11898.38 46.48 709.22 0.00 10131.59 822.48 25480.40 00:25:05.535 Received shutdown signal, test time was about 15.000000 seconds 00:25:05.535 00:25:05.535 Latency(us) 00:25:05.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.535 =================================================================================================================== 00:25:05.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.535 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:05.535 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:05.535 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:05.535 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3992769 00:25:05.535 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:05.536 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3992769 /var/tmp/bdevperf.sock 00:25:05.536 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3992769 ']' 00:25:05.536 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.536 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.536 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
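The commands that follow drive the second, one-second failover pass: two extra listeners are added to the subsystem, the freshly started bdevperf attaches NVMe0 through all three portals, and the active path is then detached so the bdev has to fail over while perform_tests runs. A condensed sketch of that sequence, using the same rpc.py and bdevperf.py invocations that appear in the xtrace below (relative paths assume the SPDK repo root):

    # Expose two more portals on the target subsystem (4420 is already listening).
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Attach the same controller to bdevperf through each portal; the extra
    # trids become failover targets for the NVMe0n1 bdev.
    for port in 4420 4421 4422; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # Drop the active path so I/O has to fail over, then kick off the timed run.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests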
00:25:05.536 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.536 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:06.101 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.101 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:06.101 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:06.361 [2024-07-25 10:40:09.841502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:06.361 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:06.361 [2024-07-25 10:40:10.017988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:06.361 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.620 NVMe0n1 00:25:06.878 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.878 00:25:07.136 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.394 00:25:07.394 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.394 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:07.394 10:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.652 10:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:10.984 10:40:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:10.985 10:40:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:10.985 10:40:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:10.985 10:40:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3993580 00:25:10.985 10:40:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3993580 00:25:11.922 0 00:25:11.922 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:11.922 [2024-07-25 10:40:08.882898] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:25:11.922 [2024-07-25 10:40:08.882951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992769 ] 00:25:11.922 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.922 [2024-07-25 10:40:08.953785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.922 [2024-07-25 10:40:09.019348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.922 [2024-07-25 10:40:11.214360] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:11.922 [2024-07-25 10:40:11.214408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.922 [2024-07-25 10:40:11.214421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.922 [2024-07-25 10:40:11.214432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.922 [2024-07-25 10:40:11.214442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.922 [2024-07-25 10:40:11.214452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.922 [2024-07-25 10:40:11.214461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.922 [2024-07-25 10:40:11.214471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.922 [2024-07-25 10:40:11.214480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.922 [2024-07-25 10:40:11.214493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.922 [2024-07-25 10:40:11.214518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.922 [2024-07-25 10:40:11.214534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2a590 (9): Bad file descriptor 00:25:11.922 [2024-07-25 10:40:11.225425] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:11.922 Running I/O for 1 seconds... 
00:25:11.922 00:25:11.922 Latency(us) 00:25:11.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.922 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:11.922 Verification LBA range: start 0x0 length 0x4000 00:25:11.922 NVMe0n1 : 1.00 11777.68 46.01 0.00 0.00 10826.36 1009.25 13841.20 00:25:11.922 =================================================================================================================== 00:25:11.922 Total : 11777.68 46.01 0.00 0.00 10826.36 1009.25 13841.20 00:25:11.922 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:11.922 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:12.181 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.440 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.440 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:12.440 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.699 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3992769 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3992769 ']' 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3992769 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3992769 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3992769' 00:25:15.989 killing process with pid 3992769 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3992769 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3992769 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:15.989 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.248 rmmod nvme_tcp 00:25:16.248 rmmod nvme_fabrics 00:25:16.248 rmmod nvme_keyring 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3989576 ']' 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3989576 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3989576 ']' 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3989576 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.248 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3989576 00:25:16.506 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:16.506 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:16.506 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3989576' 00:25:16.506 killing process with pid 3989576 00:25:16.506 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3989576 00:25:16.506 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3989576 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.506 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.043 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:19.043 00:25:19.043 real 0m39.480s 00:25:19.043 user 2m1.131s 00:25:19.043 sys 0m10.041s 00:25:19.043 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.043 10:40:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:19.043 ************************************ 00:25:19.043 END TEST nvmf_failover 00:25:19.044 ************************************ 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.044 ************************************ 00:25:19.044 START TEST nvmf_host_discovery 00:25:19.044 ************************************ 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:19.044 * Looking for test storage... 00:25:19.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:19.044 10:40:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:19.044 10:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:25.653 10:40:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:25.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:25.653 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.653 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:25.654 Found net devices under 0000:af:00.0: cvl_0_0 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:25.654 Found net devices under 0000:af:00.1: cvl_0_1 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:25.654 10:40:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:25.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:25:25.654 00:25:25.654 --- 10.0.0.2 ping statistics --- 00:25:25.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.654 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:25:25.654 00:25:25.654 --- 10.0.0.1 ping statistics --- 00:25:25.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.654 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3998059 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3998059 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3998059 ']' 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:25.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.654 10:40:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:25.654 [2024-07-25 10:40:28.818055] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:25:25.654 [2024-07-25 10:40:28.818102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.654 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.654 [2024-07-25 10:40:28.891823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.654 [2024-07-25 10:40:28.964093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.654 [2024-07-25 10:40:28.964129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.654 [2024-07-25 10:40:28.964139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.654 [2024-07-25 10:40:28.964147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.654 [2024-07-25 10:40:28.964155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:25.654 [2024-07-25 10:40:28.964174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.223 [2024-07-25 10:40:29.665032] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
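At this point nvmftestinit has brought the discovery-test target up inside a network namespace: one port of the e810 NIC (cvl_0_0) is moved into cvl_0_0_ns_spdk as 10.0.0.2 while the other (cvl_0_1) stays in the root namespace as 10.0.0.1, a one-packet ping in each direction confirms reachability, and nvmf_tgt is then launched inside the namespace. A sketch of that plumbing, limited to the commands visible in the xtrace above (interface names, addresses, and app flags are the ones from this run):

    # Split the two NIC ports across namespaces so host and target traffic
    # really crosses the wire instead of looping back.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    modprobe nvme-tcp
    # The target itself runs inside the namespace (path relative to the repo root):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &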
common/autotest_common.sh@10 -- # set +x 00:25:26.223 [2024-07-25 10:40:29.673168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.223 null0 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.223 null1 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3998331 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3998331 /tmp/host.sock 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3998331 ']' 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:26.223 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.223 10:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:26.223 [2024-07-25 10:40:29.747470] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:25:26.223 [2024-07-25 10:40:29.747515] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998331 ] 00:25:26.223 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.223 [2024-07-25 10:40:29.815669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.223 [2024-07-25 10:40:29.885233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.160 10:40:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.160 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:27.161 
10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.161 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.161 [2024-07-25 10:40:30.860277] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
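The waitforcondition/eval/sleep sequence in the trace is a simple bounded polling loop; a sketch reconstructed from the xtrace output (the real implementation lives in autotest_common.sh and may differ in detail):

  # Re-evaluate an arbitrary shell condition up to 10 times, one second apart.
  # Succeeds (returns 0) as soon as the condition holds, fails (returns 1) otherwise.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  # Usage as seen in the log:
  waitforcondition 'get_notification_count && ((notification_count == expected_count))'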
| length' 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.420 10:40:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.420 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:27.420 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:27.421 10:40:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:27.988 [2024-07-25 10:40:31.562901] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:27.988 [2024-07-25 10:40:31.562921] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:27.988 [2024-07-25 10:40:31.562934] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:27.988 
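get_notification_count, as traced here, pulls every event newer than the last seen notify_id and counts it with jq; a sketch under the assumption (consistent with the counts and ids printed in this log) that notify_id is advanced by the number of events just consumed:

  # notify_get_notifications -i <id> returns only events with an id greater than <id>.
  get_notification_count() {
      notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  notify_id=0
  get_notification_count          # first call here: 0 new events, notify_id stays 0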
[2024-07-25 10:40:31.649188] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:28.246 [2024-07-25 10:40:31.871739] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:28.246 [2024-07-25 10:40:31.871761] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.505 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
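get_subsystem_paths (host/discovery.sh@63) reports the listener port of every path behind one controller, which is how the test distinguishes a single-path attach from multipath; a sketch mirroring the traced pipeline:

  # Print the trsvcid (TCP port) of each path of the named controller, numerically
  # sorted onto one line: "4420" now, "4420 4421" once the second listener is up.
  get_subsystem_paths() {
      local name=$1
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  get_subsystem_paths nvme0   # expected to print 4420 at this point in the run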
| length' 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.506 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.765 [2024-07-25 10:40:32.344345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:28.765 [2024-07-25 10:40:32.345376] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:28.765 [2024-07-25 10:40:32.345398] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.765 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
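On the target side this step is just a second listener on the same subsystem; once the discovery service on 10.0.0.2:8009 raises its AER, the host re-reads the discovery log page and adds the 4421 path to the existing nvme0 controller. Roughly (addresses, ports and NQNs copied from the trace; calling rpc.py directly instead of the test's rpc_cmd wrapper is an assumption):

  # Expose the subsystem on a second TCP portal and wait for multipath to form.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'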
-- # get_subsystem_names 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
'[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.766 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.025 [2024-07-25 10:40:32.472980] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:29.025 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:29.025 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:29.025 [2024-07-25 10:40:32.579655] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:29.025 [2024-07-25 10:40:32.579672] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:29.025 [2024-07-25 10:40:32.579679] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:29.993 10:40:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.993 [2024-07-25 10:40:33.604561] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:29.993 [2024-07-25 10:40:33.604581] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
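The inverse step follows: removing the original 4420 listener should shrink the controller back to a single 4421 path once the next AER and log page arrive. A sketch of that sequence, with the same caveat that the test itself drives this through rpc_cmd:

  # Drop the first portal; the host is expected to keep only the 4421 path.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'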
common/autotest_common.sh@917 -- # get_subsystem_names 00:25:29.993 [2024-07-25 10:40:33.613613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.993 [2024-07-25 10:40:33.613631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.993 [2024-07-25 10:40:33.613642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.993 [2024-07-25 10:40:33.613652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.993 [2024-07-25 10:40:33.613661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.993 [2024-07-25 10:40:33.613671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.993 [2024-07-25 10:40:33.613680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.993 [2024-07-25 10:40:33.613689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.993 [2024-07-25 10:40:33.613699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77fd0 is same with the state(5) to be set 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.993 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.994 [2024-07-25 10:40:33.623627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c77fd0 (9): Bad file descriptor 00:25:29.994 [2024-07-25 10:40:33.633664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:29.994 [2024-07-25 10:40:33.633937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.994 [2024-07-25 10:40:33.633953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c77fd0 with addr=10.0.0.2, port=4420 00:25:29.994 [2024-07-25 10:40:33.633963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77fd0 is same with the state(5) to be set 00:25:29.994 [2024-07-25 10:40:33.633977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c77fd0 (9): Bad file descriptor 00:25:29.994 [2024-07-25 10:40:33.633989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:29.994 [2024-07-25 10:40:33.633998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:29.994 [2024-07-25 10:40:33.634007] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:29.994 [2024-07-25 10:40:33.634019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.994 [2024-07-25 10:40:33.643722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:29.994 [2024-07-25 10:40:33.644063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.994 [2024-07-25 10:40:33.644077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c77fd0 with addr=10.0.0.2, port=4420 00:25:29.994 [2024-07-25 10:40:33.644086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77fd0 is same with the state(5) to be set 00:25:29.994 [2024-07-25 10:40:33.644100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c77fd0 (9): Bad file descriptor 00:25:29.994 [2024-07-25 10:40:33.644112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:29.994 [2024-07-25 10:40:33.644120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:29.994 [2024-07-25 10:40:33.644129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:29.994 [2024-07-25 10:40:33.644140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.994 [2024-07-25 10:40:33.653774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:29.994 [2024-07-25 10:40:33.654148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.994 [2024-07-25 10:40:33.654163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c77fd0 with addr=10.0.0.2, port=4420 00:25:29.994 [2024-07-25 10:40:33.654173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77fd0 is same with the state(5) to be set 00:25:29.994 [2024-07-25 10:40:33.654185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c77fd0 (9): Bad file descriptor 00:25:29.994 [2024-07-25 10:40:33.654198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:29.994 [2024-07-25 10:40:33.654209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:29.994 [2024-07-25 10:40:33.654218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:29.994 [2024-07-25 10:40:33.654229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
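The burst of "connect() failed, errno = 111" / "Resetting controller failed." messages is the host-side bdev_nvme layer still retrying the just-removed 10.0.0.2:4420 portal until the updated log page prunes that path; errno 111 is ECONNREFUSED on Linux, which can be confirmed with (header path may vary by distribution):

  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # => #define ECONNREFUSED    111     /* Connection refused */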
00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:29.994 [2024-07-25 10:40:33.663829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:29.994 [2024-07-25 10:40:33.664167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.994 [2024-07-25 10:40:33.664183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c77fd0 with addr=10.0.0.2, port=4420 00:25:29.994 [2024-07-25 10:40:33.664192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77fd0 is same with the state(5) to be set 00:25:29.994 [2024-07-25 10:40:33.664204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c77fd0 (9): Bad file descriptor 00:25:29.994 [2024-07-25 10:40:33.664219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:29.994 [2024-07-25 10:40:33.664228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:29.994 [2024-07-25 10:40:33.664237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:29.994 [2024-07-25 10:40:33.664248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.994 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.994 [2024-07-25 10:40:33.673882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:29.994 [2024-07-25 10:40:33.674159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.994 [2024-07-25 10:40:33.674175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c77fd0 with addr=10.0.0.2, port=4420 00:25:29.994 [2024-07-25 10:40:33.674185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77fd0 is same with the state(5) to be set 00:25:29.994 [2024-07-25 10:40:33.674198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c77fd0 (9): Bad file descriptor 00:25:29.994 [2024-07-25 10:40:33.674210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:29.994 [2024-07-25 10:40:33.674218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:29.994 [2024-07-25 10:40:33.674230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:29.994 [2024-07-25 10:40:33.674241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.994 [2024-07-25 10:40:33.683939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:29.994 [2024-07-25 10:40:33.684200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.994 [2024-07-25 10:40:33.684213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c77fd0 with addr=10.0.0.2, port=4420 00:25:29.994 [2024-07-25 10:40:33.684222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77fd0 is same with the state(5) to be set 00:25:29.994 [2024-07-25 10:40:33.684235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c77fd0 (9): Bad file descriptor 00:25:29.994 [2024-07-25 10:40:33.684246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:29.994 [2024-07-25 10:40:33.684254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:29.994 [2024-07-25 10:40:33.684263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:29.994 [2024-07-25 10:40:33.684274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.994 [2024-07-25 10:40:33.692838] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:29.994 [2024-07-25 10:40:33.692854] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:30.253 10:40:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:30.253 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # 
(( max-- )) 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count 
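Stopping the discovery service (host/discovery.sh@134) is expected to detach everything it created, so both lists drain back to empty before the final notification check; sketched with the same helpers as above:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "" ]]'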
&& ((notification_count == expected_count))' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.254 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.513 10:40:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.450 [2024-07-25 10:40:34.991447] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:31.450 [2024-07-25 10:40:34.991465] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:31.450 [2024-07-25 10:40:34.991476] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.450 [2024-07-25 10:40:35.079744] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:31.709 [2024-07-25 10:40:35.351853] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:31.709 [2024-07-25 10:40:35.351880] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s 
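The discovery service is then restarted with -w (wait_for_attach), so the RPC itself blocks until the initial attach completes and the next assertions can run without polling; as traced:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w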
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.709 request: 00:25:31.709 { 00:25:31.709 "name": "nvme", 00:25:31.709 "trtype": "tcp", 00:25:31.709 "traddr": "10.0.0.2", 00:25:31.709 "adrfam": "ipv4", 00:25:31.709 "trsvcid": "8009", 00:25:31.709 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:31.709 "wait_for_attach": true, 00:25:31.709 "method": "bdev_nvme_start_discovery", 00:25:31.709 "req_id": 1 00:25:31.709 } 00:25:31.709 Got JSON-RPC error response 00:25:31.709 response: 00:25:31.709 { 00:25:31.709 "code": -17, 00:25:31.709 "message": "File exists" 00:25:31.709 } 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:31.709 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- 
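The request/response block ending in "File exists" above is the negative case: issuing bdev_nvme_start_discovery again while a discovery service already covers that portal is expected to fail with JSON-RPC error -17, and the test's NOT wrapper passes only when the RPC exits non-zero. A plain-bash equivalent of that expectation:

  # The duplicate start must be rejected; treat an unexpected success as a test failure.
  if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
         -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate bdev_nvme_start_discovery unexpectedly succeeded" >&2
      exit 1
  fi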
# jq -r '.[].name' 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.968 request: 00:25:31.968 { 00:25:31.968 "name": "nvme_second", 00:25:31.968 "trtype": "tcp", 00:25:31.968 "traddr": "10.0.0.2", 00:25:31.968 "adrfam": "ipv4", 00:25:31.968 "trsvcid": "8009", 00:25:31.968 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:31.968 "wait_for_attach": true, 00:25:31.968 "method": "bdev_nvme_start_discovery", 00:25:31.968 "req_id": 1 00:25:31.968 } 00:25:31.968 Got JSON-RPC error response 00:25:31.968 response: 00:25:31.968 { 00:25:31.968 "code": -17, 00:25:31.968 "message": "File exists" 00:25:31.968 } 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:31.968 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- 
# jq -r '.[].name' 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.969 10:40:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.345 [2024-07-25 10:40:36.615471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.345 [2024-07-25 10:40:36.615500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78860 with addr=10.0.0.2, port=8010 00:25:33.345 [2024-07-25 10:40:36.615516] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:33.345 [2024-07-25 10:40:36.615524] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:33.345 [2024-07-25 10:40:36.615532] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:34.281 [2024-07-25 10:40:37.617894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.281 [2024-07-25 10:40:37.617919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78860 with addr=10.0.0.2, port=8010 00:25:34.281 [2024-07-25 10:40:37.617932] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:34.281 [2024-07-25 10:40:37.617956] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:34.281 [2024-07-25 10:40:37.617964] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:35.219 [2024-07-25 10:40:38.619947] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:35.219 request: 00:25:35.219 { 00:25:35.219 "name": "nvme_second", 00:25:35.219 "trtype": "tcp", 00:25:35.219 "traddr": "10.0.0.2", 00:25:35.219 "adrfam": "ipv4", 00:25:35.219 "trsvcid": "8010", 00:25:35.219 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:35.219 "wait_for_attach": false, 00:25:35.219 "attach_timeout_ms": 3000, 00:25:35.219 "method": "bdev_nvme_start_discovery", 00:25:35.219 "req_id": 1 00:25:35.219 } 00:25:35.219 Got JSON-RPC error response 00:25:35.219 response: 00:25:35.219 { 00:25:35.219 "code": -110, 00:25:35.219 "message": "Connection timed out" 00:25:35.219 } 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3998331 00:25:35.219 10:40:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.219 rmmod nvme_tcp 00:25:35.219 rmmod nvme_fabrics 00:25:35.219 rmmod nvme_keyring 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3998059 ']' 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3998059 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3998059 ']' 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3998059 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3998059 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3998059' 00:25:35.219 killing process with pid 3998059 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3998059 00:25:35.219 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3998059 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.479 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.383 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:37.383 00:25:37.383 real 0m18.714s 00:25:37.383 user 0m22.081s 00:25:37.383 sys 0m6.722s 00:25:37.383 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.383 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.383 ************************************ 00:25:37.383 END TEST nvmf_host_discovery 00:25:37.383 ************************************ 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.643 ************************************ 00:25:37.643 START TEST nvmf_host_multipath_status 00:25:37.643 ************************************ 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:37.643 * Looking for test storage... 00:25:37.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.643 
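
For reference, the two failure modes exercised by the nvmf_host_discovery run above can be reproduced with the same calls the script traces; a condensed sketch, using the /tmp/host.sock socket, addresses and NQN from this run (rpc_cmd in the trace effectively forwards these arguments to the SPDK RPC client, assumed here to be scripts/rpc.py):

  # second discovery request against the already-attached discovery controller
  # on 10.0.0.2:8009 -> JSON-RPC error -17 "File exists"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # discovery against port 8010, where nothing is listening, with a 3000 ms
  # attach timeout -> connect() fails (errno 111) and the call returns
  # JSON-RPC error -110 "Connection timed out"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
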
10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:37.643 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.644 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:44.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:44.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.253 10:40:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.253 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:44.254 Found net devices under 0000:af:00.0: cvl_0_0 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:44.254 Found net devices under 0000:af:00.1: cvl_0_1 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:44.254 10:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:44.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:44.513 00:25:44.513 --- 10.0.0.2 ping statistics --- 00:25:44.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.513 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:25:44.513 00:25:44.513 --- 10.0.0.1 ping statistics --- 00:25:44.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.513 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4003509 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4003509 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 4003509 ']' 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:44.513 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:44.513 [2024-07-25 10:40:48.140424] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
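
The test-bed plumbing that nvmf_tcp_init traces above boils down to the following sequence; a condensed sketch of the commands shown in the trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this rig):

  ip netns add cvl_0_0_ns_spdk                                         # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

nvmf_tgt is then launched inside the cvl_0_0_ns_spdk namespace, which is why both pings must succeed before the target starts.
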
00:25:44.513 [2024-07-25 10:40:48.140470] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.513 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.513 [2024-07-25 10:40:48.213070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:44.771 [2024-07-25 10:40:48.291887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.771 [2024-07-25 10:40:48.291923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.771 [2024-07-25 10:40:48.291933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.771 [2024-07-25 10:40:48.291942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.771 [2024-07-25 10:40:48.291951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.771 [2024-07-25 10:40:48.292002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.771 [2024-07-25 10:40:48.292005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4003509 00:25:45.338 10:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:45.597 [2024-07-25 10:40:49.132661] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.597 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:45.856 Malloc0 00:25:45.856 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:45.856 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:46.114 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.114 [2024-07-25 10:40:49.818406] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.373 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:46.373 [2024-07-25 10:40:49.994864] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4003852 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4003852 /var/tmp/bdevperf.sock 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 4003852 ']' 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
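
Before bdevperf comes up, the multipath_status script has already provisioned the target through the rpc.py calls traced above; condensed, the setup is (rpc.py abbreviates the full scripts/rpc.py path shown in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf is then started with -z and its own RPC socket (/var/tmp/bdevperf.sock),
  # and Nvme0 is attached over both listeners with -x multipath, as traced below.
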
00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:46.373 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.309 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.309 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:47.309 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:47.567 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:47.826 Nvme0n1 00:25:47.826 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:48.393 Nvme0n1 00:25:48.393 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:48.393 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:50.295 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:50.295 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:50.554 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:50.554 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.930 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.188 10:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.446 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.446 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.446 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.446 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:52.704 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.704 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:52.704 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.704 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:52.704 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.704 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:52.704 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:52.963 10:40:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:53.221 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:54.156 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:54.156 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:54.156 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.156 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.414 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.414 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:54.414 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.414 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.674 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.932 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.932 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.932 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.932 10:40:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.190 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.190 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:55.190 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.190 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.449 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.449 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:55.449 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:55.449 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:55.708 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:56.699 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:56.699 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:56.699 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.699 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.958 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.216 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.216 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.216 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.216 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.475 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.475 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.475 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.475 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.733 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.733 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.733 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.733 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.733 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.733 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:57.733 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:57.992 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:58.251 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:59.187 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:59.187 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.187 10:41:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.187 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.446 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.446 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:59.446 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.446 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.446 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.446 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.446 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.446 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.705 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.705 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.705 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.705 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.964 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.964 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.964 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.964 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.222 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.222 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:00.222 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.222 10:41:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.222 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.222 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:00.222 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:00.482 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:00.741 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:01.677 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:01.677 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:01.677 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.677 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.935 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.936 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:01.936 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.936 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:01.936 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.936 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.936 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.936 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.194 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.194 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.194 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.195 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.453 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.453 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:02.453 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.453 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.453 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.453 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:02.453 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.453 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.713 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.713 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:02.713 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:02.971 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:02.971 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:04.349 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:04.349 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:04.349 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.349 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.349 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.350 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:04.350 10:41:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.350 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.350 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.350 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.350 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.350 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:04.609 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.609 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:04.609 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.609 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.868 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.868 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:04.868 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.868 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.127 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.127 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.127 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.127 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.127 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.127 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:05.385 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:05.385 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:05.642 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:05.642 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.018 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.276 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.276 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.276 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.277 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.535 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.535 10:41:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.536 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.536 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.536 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.536 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.536 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.536 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.794 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.794 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:07.794 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.053 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:08.311 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:09.247 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:09.247 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.247 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.247 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.506 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.764 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.764 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.764 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.764 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.022 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.022 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.022 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.022 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.314 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.314 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.314 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.314 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.314 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.314 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:10.314 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:10.574 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:10.832 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
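[Editorial sketch, not part of the captured log.] The xtrace above repeats one pattern for every ANA-state change: query bdevperf's RPC socket for the I/O paths and compare the per-listener flags (current / connected / accessible) for ports 4420 and 4421 against the expected values. The real helper is port_status() in host/multipath_status.sh; the snippet below is only a minimal re-statement of the commands visible in the trace, under assumed names (check_flag is illustrative, the rpc path and socket are taken verbatim from the log).

    # Minimal sketch of the check the trace performs after each
    # nvmf_subsystem_listener_set_ana_state call (assumed helper name).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    check_flag() {                 # check_flag <trsvcid> <field> <expected>
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # e.g. after "set_ANA_state non_optimized non_optimized" the trace
    # expects check_status true true true true true true, i.e.:
    check_flag 4420 current    true
    check_flag 4421 accessible true

The sleep 1 seen before each check gives the initiator time to pick up the new ANA state before the flags are read back.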
00:26:11.768 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:11.768 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:11.768 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.768 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.027 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.285 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.285 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:12.286 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.286 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:12.544 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.544 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:12.544 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.544 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.544 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.545 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.545 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.545 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.804 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.804 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:12.804 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:13.062 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:13.321 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:14.259 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:14.259 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:14.259 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.259 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.518 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.777 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:14.777 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.777 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.777 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.036 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.036 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.036 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.036 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4003852 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 4003852 ']' 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 4003852 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4003852 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4003852' 00:26:15.296 killing process with pid 4003852 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 4003852 00:26:15.296 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 4003852 00:26:15.579 Connection closed with partial response: 00:26:15.579 00:26:15.579 00:26:15.579 
10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4003852 00:26:15.579 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:15.579 [2024-07-25 10:40:50.059851] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:26:15.579 [2024-07-25 10:40:50.059912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003852 ] 00:26:15.579 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.579 [2024-07-25 10:40:50.127681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.579 [2024-07-25 10:40:50.197914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.579 Running I/O for 90 seconds... 00:26:15.579 [2024-07-25 10:41:04.048639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 
10:41:04.048856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.048987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.048998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.049013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.049022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.579 [2024-07-25 10:41:04.049037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.579 [2024-07-25 10:41:04.049048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.049988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.049998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 
[2024-07-25 10:41:04.050322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.580 [2024-07-25 10:41:04.050467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.580 [2024-07-25 10:41:04.050481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1344 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:37 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.050989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.050999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 
10:41:04.051702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.581 [2024-07-25 10:41:04.051723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.581 [2024-07-25 10:41:04.051749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.581 [2024-07-25 10:41:04.051774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.581 [2024-07-25 10:41:04.051799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.581 [2024-07-25 10:41:04.051824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:15.581 [2024-07-25 10:41:04.051839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.581 [2024-07-25 10:41:04.051848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.051863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.051872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.051887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.051896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.051911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.051921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.051935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.051945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a 
p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.051960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.051972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.051986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.051996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 
[2024-07-25 10:41:04.052676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.582 [2024-07-25 10:41:04.052728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:15.582 [2024-07-25 10:41:04.052793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.582 [2024-07-25 10:41:04.052802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.052816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.052826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.052840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.052850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.053977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.053987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.054001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.054012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.583 
[2024-07-25 10:41:04.054027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.583 [2024-07-25 10:41:04.054036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.583 [2024-07-25 10:41:04.054051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.054376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.054391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:15.584 [2024-07-25 10:41:04.065565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.584 [2024-07-25 10:41:04.065574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:15.585 [2024-07-25 10:41:04.065588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.585 [2024-07-25 10:41:04.065598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:15.585 [2024-07-25 10:41:04.065612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.585 [2024-07-25 10:41:04.065621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:15.585 [2024-07-25 10:41:04.065636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.585 [2024-07-25 10:41:04.065645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:15.585 [2024-07-25 10:41:04.065660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.585 
[2024-07-25 10:41:04.065669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:15.585 [2024-07-25 10:41:04.065683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.585 [2024-07-25 10:41:04.065694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:15.585 [2024-07-25 10:41:04.065708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.585 [2024-07-25 10:41:04.065728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[... the log continues in the same pattern: paired nvme_io_qpair_print_command (nvme_qpair.c:243) and spdk_nvme_print_completion (nvme_qpair.c:474) *NOTICE* entries for every outstanding I/O on qid:1 nsid:1 — WRITEs (lba 896-1784, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs (lba 768-888, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) — each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), host timestamps 2024-07-25 10:41:04.065-10:41:04.079, elapsed time 00:26:15.585-00:26:15.590 ...]
00:26:15.590 [2024-07-25 10:41:04.079247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.590 [2024-07-25 10:41:04.079257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:15.590 [2024-07-25 10:41:04.079272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.590 [2024-07-25 10:41:04.079282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0
sqhd:0056 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.590 [2024-07-25 10:41:04.079678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.590 [2024-07-25 10:41:04.079693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.079925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.079935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 
[2024-07-25 10:41:04.080448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1432 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:119 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.080988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.080999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.081014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.081024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.081038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.081049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:15.591 [2024-07-25 10:41:04.081065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.591 [2024-07-25 10:41:04.081075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.081198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.081223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.081248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.081272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.081297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.081321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.081347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:15.592 
[2024-07-25 10:41:04.081436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.081883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.081893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.082383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.082397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.082414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.592 [2024-07-25 10:41:04.082424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.082440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.082452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.082468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-07-25 10:41:04.082477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:15.592 [2024-07-25 10:41:04.082493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.593 [2024-07-25 10:41:04.082502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.593 [2024-07-25 10:41:04.082527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.593 [2024-07-25 10:41:04.082551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.593 [2024-07-25 10:41:04.082576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.593 [2024-07-25 10:41:04.082601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.593 [2024-07-25 10:41:04.082626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.593 [2024-07-25 10:41:04.082675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 
10:41:04.082905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.082978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.082992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.593 [2024-07-25 10:41:04.083345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:15.593 [2024-07-25 10:41:04.083363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083632] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083885] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.083985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.083995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:26:15.594 [2024-07-25 10:41:04.084656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.594 [2024-07-25 10:41:04.084852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.594 [2024-07-25 10:41:04.084867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.084877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.084892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.084902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.084916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.084926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.084941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.084950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.084965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.084975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.084990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.084999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.595 [2024-07-25 10:41:04.085400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.595 [2024-07-25 10:41:04.085427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.595 [2024-07-25 10:41:04.085453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.595 [2024-07-25 10:41:04.085478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.595 [2024-07-25 10:41:04.085502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.595 [2024-07-25 10:41:04.085527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.595 [2024-07-25 10:41:04.085552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:15.595 [2024-07-25 10:41:04.085653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.595 [2024-07-25 10:41:04.085832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:15.595 [2024-07-25 10:41:04.085847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.085857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.085872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.085882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.085897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.085907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.085921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.085931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.085945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.085955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.085970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.085980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.085995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.596 [2024-07-25 10:41:04.086903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.086977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.086991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 
10:41:04.087140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.596 [2024-07-25 10:41:04.087299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.596 [2024-07-25 10:41:04.087314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:26:15.597 [2024-07-25 10:41:04.087387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.087982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.087992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.088007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.088017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.088032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.088042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.088056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.088066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.088082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.088092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.088107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.597 [2024-07-25 10:41:04.088116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.597 [2024-07-25 10:41:04.088132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:15.598 [2024-07-25 10:41:04.088911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.088978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.088988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1440 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.598 [2024-07-25 10:41:04.089385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:15.598 [2024-07-25 10:41:04.089400] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.598 [2024-07-25 10:41:04.089410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:15.598 [2024-07-25 10:41:04.089426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.598 [2024-07-25 10:41:04.089435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:15.598 [2024-07-25 10:41:04.089622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.598 [2024-07-25 10:41:04.089632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
[repeated nvme_qpair NOTICE output, elapsed 00:26:15.598-00:26:15.603, wall time 2024-07-25 10:41:04.089 through 10:41:04.099: WRITE (lba 896-1784) and READ (lba 768-888) commands, all sqid:1 nsid:1 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 with cdw0:0 p:0 m:0 dnr:0]
00:26:15.603 [2024-07-25 10:41:04.099553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.603 [2024-07-25 10:41:04.099568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.603 [2024-07-25 10:41:04.099577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.603 [2024-07-25 10:41:04.099592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.603 [2024-07-25 10:41:04.099602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099803] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.099978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.099993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100051] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:15.604 [2024-07-25 10:41:04.100300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.100980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.100990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.101005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.101015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.101030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.101039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.101054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.101064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.101078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1368 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.101088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.604 [2024-07-25 10:41:04.101103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.604 [2024-07-25 10:41:04.101114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.605 [2024-07-25 10:41:04.101791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.605 [2024-07-25 10:41:04.101816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:15.605 
[2024-07-25 10:41:04.101831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.605 [2024-07-25 10:41:04.101840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.605 [2024-07-25 10:41:04.101865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.605 [2024-07-25 10:41:04.101889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.605 [2024-07-25 10:41:04.101914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.605 [2024-07-25 10:41:04.101939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.101978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.101988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.102003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.102013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.102027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.102037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.102053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.102063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 
cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:15.605 [2024-07-25 10:41:04.102078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.605 [2024-07-25 10:41:04.102087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.102988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.102998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.103022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.103249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.606 [2024-07-25 10:41:04.103273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 
[2024-07-25 10:41:04.103298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.103322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.103347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.103371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.606 [2024-07-25 10:41:04.103396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:15.606 [2024-07-25 10:41:04.103410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:976 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.103985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.103995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.607 
[2024-07-25 10:41:04.104280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.607 [2024-07-25 10:41:04.104364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.607 [2024-07-25 10:41:04.104380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.104390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.104405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.104414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.104430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.104440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.104455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.104465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.104480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.104489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 
sqhd:006c p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 
[2024-07-25 10:41:04.105784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.608 [2024-07-25 10:41:04.105859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:15.608 [2024-07-25 10:41:04.105874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.105884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.105898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.105908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.105922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.105932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.105947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.105956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.105971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.105981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.105995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.106005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:784 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.106029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.106054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.106078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.106102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.106129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:51 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.106540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.106550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.107062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.107088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.107112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.107137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.107162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.107187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.609 [2024-07-25 10:41:04.107214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.107239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 
10:41:04.107254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.107263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.107288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.609 [2024-07-25 10:41:04.107312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:15.609 [2024-07-25 10:41:04.107327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.610 [2024-07-25 10:41:04.107337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.610 [2024-07-25 10:41:04.107362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.610 [2024-07-25 10:41:04.107386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.610 [2024-07-25 10:41:04.107411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.610 [2024-07-25 10:41:04.107460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 
sqhd:003b p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.107977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.107987] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.108012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.108037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.108061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.108084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.108109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.108135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.610 [2024-07-25 10:41:04.108159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:15.610 [2024-07-25 10:41:04.108174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:15.611 [2024-07-25 10:41:04.108480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.108642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.108652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1296 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.611 [2024-07-25 10:41:04.109467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.611 [2024-07-25 10:41:04.109482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.611 [2024-07-25 10:41:04.109492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE output: several hundred further WRITE/READ command and completion pairs on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged between 10:41:04 and 10:41:16 ...]
00:26:15.616 [2024-07-25 10:41:16.792707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.616 [2024-07-25 10:41:16.792721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:15.616 [2024-07-25 10:41:16.792736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.616 [2024-07-25 10:41:16.792746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:15.616 Received shutdown signal, test time was about 26.956847 seconds
00:26:15.616
00:26:15.616 Latency(us)
00:26:15.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:15.616 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:15.616 Verification LBA range: start 0x0 length 0x4000
00:26:15.616 Nvme0n1 : 26.96 11204.72 43.77 0.00 0.00 11402.56 471.86 3073585.97
00:26:15.616 ===================================================================================================================
00:26:15.616 Total : 11204.72 43.77 0.00 0.00 11402.56 471.86 3073585.97
00:26:15.616 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:15.875 rmmod nvme_tcp
00:26:15.875 rmmod nvme_fabrics
00:26:15.875 rmmod nvme_keyring
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4003509 ']'
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4003509
00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 4003509 ']'
common/autotest_common.sh@954 -- # kill -0 4003509 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4003509 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4003509' 00:26:15.875 killing process with pid 4003509 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 4003509 00:26:15.875 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 4003509 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.133 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.037 10:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:18.037 00:26:18.037 real 0m40.595s 00:26:18.037 user 1m43.245s 00:26:18.037 sys 0m14.620s 00:26:18.037 10:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:18.037 10:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.037 ************************************ 00:26:18.037 END TEST nvmf_host_multipath_status 00:26:18.037 ************************************ 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.296 ************************************ 00:26:18.296 START TEST nvmf_discovery_remove_ifc 00:26:18.296 ************************************ 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
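For reference, the nvmf_host_multipath_status teardown recorded just above reduces to three steps: delete the subsystem over RPC, unload the host-side NVMe/TCP modules, and stop the nvmf_tgt process. A minimal sketch of that cleanup, assuming the workspace rpc.py path from this run and the target pid saved in $nvmfpid (both taken from the log, not a definitive implementation):

    # Illustrative cleanup sketch mirroring the nvmftestfini/killprocess output above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Drop the subsystem the test created on the target.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload initiator modules; -r also removes now-unused dependencies
    # (nvme_fabrics, nvme_keyring), matching the rmmod lines above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target reactor and wait for it to exit; the log's killprocess
    # helper uses kill followed by wait, this poll loop is equivalent in effect.
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done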
00:26:18.296 * Looking for test storage... 00:26:18.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.296 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
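The common.sh settings above tie the host identity to the machine: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is the UUID embedded in that NQN (006f0d1b-21c0-e711-906e-00163566263e in this run). A small sketch of one way to derive the pair that matches the values printed above; the actual common.sh may differ in detail:

    # Hypothetical derivation, shown only to make the relationship explicit.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep the part after the last ':' -> <uuid>
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")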
00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:18.297 10:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:24.867 10:41:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:24.867 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:24.867 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:24.867 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:24.868 Found net devices under 0000:af:00.0: cvl_0_0 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.868 
10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:24.868 Found net devices under 0000:af:00.1: cvl_0_1 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:24.868 10:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:24.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:24.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:26:24.868 00:26:24.868 --- 10.0.0.2 ping statistics --- 00:26:24.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.868 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:26:24.868 00:26:24.868 --- 10.0.0.1 ping statistics --- 00:26:24.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.868 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4012444 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4012444 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 4012444 ']' 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
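The nvmf_tcp_init sequence above splits the two cvl interfaces across namespaces so the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2 inside cvl_0_0_ns_spdk) exchange traffic over a real link, verified by the two pings. A condensed sketch of that topology setup, reusing the interface and namespace names from this run:

    # Isolate the target NIC in its own network namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in on the initiator interface, then check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1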
00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.868 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:24.868 [2024-07-25 10:41:28.114126] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:26:24.868 [2024-07-25 10:41:28.114179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.868 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.868 [2024-07-25 10:41:28.188114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.868 [2024-07-25 10:41:28.259965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.868 [2024-07-25 10:41:28.260002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.868 [2024-07-25 10:41:28.260012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.868 [2024-07-25 10:41:28.260020] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.868 [2024-07-25 10:41:28.260027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.868 [2024-07-25 10:41:28.260047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.437 10:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.437 [2024-07-25 10:41:28.957323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.437 [2024-07-25 10:41:28.965450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:25.437 null0 00:26:25.437 [2024-07-25 10:41:28.997475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4012704 00:26:25.437 10:41:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4012704 /tmp/host.sock 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 4012704 ']' 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:25.437 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.437 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.437 [2024-07-25 10:41:29.051840] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:26:25.437 [2024-07-25 10:41:29.051886] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012704 ] 00:26:25.437 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.437 [2024-07-25 10:41:29.120097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.695 [2024-07-25 10:41:29.189994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.262 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.262 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:26.262 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.262 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:26.262 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.262 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.263 10:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.641 [2024-07-25 10:41:30.986894] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:27.641 [2024-07-25 10:41:30.986921] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:27.641 [2024-07-25 10:41:30.986937] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:27.641 [2024-07-25 10:41:31.073189] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:27.641 [2024-07-25 10:41:31.178518] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:27.641 [2024-07-25 10:41:31.178563] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:27.641 [2024-07-25 10:41:31.178584] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:27.641 [2024-07-25 10:41:31.178599] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:27.641 [2024-07-25 10:41:31.178619] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.641 [2024-07-25 10:41:31.185093] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa3ed40 was disconnected and freed. delete nvme_qpair. 
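The host side above is driven entirely over the /tmp/host.sock RPC socket: enable automatic attach in bdev_nvme before framework init, point discovery at the target's 8009 listener, then poll the bdev list until the attached namespace shows up as nvme0n1. A sketch of the same sequence issued directly with rpc.py, reusing only options that appear in the log:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"

    # Options must be set before framework_start_init (the app was started with --wait-for-rpc).
    $rpc bdev_nvme_set_options -e 1
    $rpc framework_start_init

    # Attach through the discovery service; timeouts match the test's failover settings.
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # wait_for_bdev nvme0n1 boils down to polling bdev_get_bdevs for the expected name.
    until $rpc bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme0n1; do sleep 1; done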
00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:27.641 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:27.900 10:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:28.836 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.780 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.041 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.978 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.914 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.914 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.914 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.914 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.914 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.914 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.914 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.172 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.172 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.172 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.109 [2024-07-25 10:41:36.619565] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:33.109 [2024-07-25 10:41:36.619609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.109 [2024-07-25 10:41:36.619622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.109 [2024-07-25 10:41:36.619632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.109 [2024-07-25 10:41:36.619642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.109 [2024-07-25 10:41:36.619651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.109 [2024-07-25 10:41:36.619660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.109 [2024-07-25 10:41:36.619669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.109 [2024-07-25 10:41:36.619678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.109 [2024-07-25 10:41:36.619687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.109 [2024-07-25 10:41:36.619697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.109 [2024-07-25 10:41:36.619706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa05740 is same with the state(5) to be set 00:26:33.109 [2024-07-25 10:41:36.629586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa05740 (9): Bad file descriptor 00:26:33.109 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.109 [2024-07-25 10:41:36.639623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.109 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.109 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.109 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.109 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.109 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.109 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.045 [2024-07-25 10:41:37.687745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:34.045 [2024-07-25 10:41:37.687790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa05740 with addr=10.0.0.2, port=4420 00:26:34.045 [2024-07-25 10:41:37.687807] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa05740 is same with the state(5) to be set 00:26:34.045 [2024-07-25 10:41:37.687837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa05740 (9): Bad file descriptor 00:26:34.045 [2024-07-25 10:41:37.688221] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.045 [2024-07-25 10:41:37.688252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:34.045 [2024-07-25 10:41:37.688265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:34.045 [2024-07-25 10:41:37.688279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:34.045 [2024-07-25 10:41:37.688299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.045 [2024-07-25 10:41:37.688317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:34.045 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.045 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.045 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.422 [2024-07-25 10:41:38.690785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:35.422 [2024-07-25 10:41:38.690807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:35.422 [2024-07-25 10:41:38.690818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:35.422 [2024-07-25 10:41:38.690827] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:35.422 [2024-07-25 10:41:38.690840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.422 [2024-07-25 10:41:38.690857] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:35.422 [2024-07-25 10:41:38.690877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.422 [2024-07-25 10:41:38.690888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.422 [2024-07-25 10:41:38.690900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.422 [2024-07-25 10:41:38.690910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.422 [2024-07-25 10:41:38.690919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.422 [2024-07-25 10:41:38.690929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.422 [2024-07-25 10:41:38.690939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.422 [2024-07-25 10:41:38.690949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.422 [2024-07-25 10:41:38.690958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.422 [2024-07-25 10:41:38.690968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.422 [2024-07-25 10:41:38.690977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:35.422 [2024-07-25 10:41:38.691051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa04ba0 (9): Bad file descriptor 00:26:35.422 [2024-07-25 10:41:38.692063] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:35.422 [2024-07-25 10:41:38.692076] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:35.422 10:41:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.359 10:41:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:36.359 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.296 [2024-07-25 10:41:40.741364] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:37.296 [2024-07-25 10:41:40.741387] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:37.296 [2024-07-25 10:41:40.741400] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:37.296 [2024-07-25 10:41:40.869782] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:37.296 [2024-07-25 10:41:40.971254] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:37.296 [2024-07-25 10:41:40.971287] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:37.296 [2024-07-25 10:41:40.971305] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:37.296 [2024-07-25 10:41:40.971319] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:37.296 [2024-07-25 10:41:40.971327] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:37.296 [2024-07-25 10:41:40.980012] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9f4110 was disconnected and freed. delete nvme_qpair. 
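The remove/restore cycle above is the core of this test: with the discovery controller still configured, deleting the target's address makes every reconnect attempt fail until ctrlr-loss-timeout expires and nvme0n1 is deleted; adding the address back lets the discovery poller re-attach and surface the namespace again as nvme1n1. A sketch of that cycle with the names from this run, assuming the same /tmp/host.sock RPC socket as before:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"

    # Take the target address away; reconnects time out and the bdev list drains.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    while [ -n "$($rpc bdev_get_bdevs | jq -r '.[].name' | xargs)" ]; do sleep 1; done

    # Give the address back; the still-running discovery service re-attaches on its own.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    until $rpc bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme1n1; do sleep 1; done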
00:26:37.296 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.296 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.296 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.296 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.296 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.296 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.296 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4012704 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 4012704 ']' 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 4012704 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4012704 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4012704' 00:26:37.556 killing process with pid 4012704 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 4012704 00:26:37.556 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 4012704 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:37.815 rmmod nvme_tcp 00:26:37.815 rmmod nvme_fabrics 00:26:37.815 rmmod nvme_keyring 00:26:37.815 10:41:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4012444 ']' 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4012444 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 4012444 ']' 00:26:37.815 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 4012444 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4012444 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4012444' 00:26:37.816 killing process with pid 4012444 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 4012444 00:26:37.816 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 4012444 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.075 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.980 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:39.980 00:26:39.980 real 0m21.834s 00:26:39.980 user 0m25.957s 00:26:39.980 sys 0m6.772s 00:26:39.980 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:39.980 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.981 ************************************ 00:26:39.981 END TEST nvmf_discovery_remove_ifc 00:26:39.981 ************************************ 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.238 ************************************ 00:26:40.238 START TEST nvmf_identify_kernel_target 00:26:40.238 ************************************ 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:40.238 * Looking for test storage... 00:26:40.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.238 10:41:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.238 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.239 10:41:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:46.839 
10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.839 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:46.840 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:46.840 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:46.840 Found net devices under 0000:af:00.0: cvl_0_0 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:46.840 Found net devices under 0000:af:00.1: cvl_0_1 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.840 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:47.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:47.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:26:47.100 00:26:47.100 --- 10.0.0.2 ping statistics --- 00:26:47.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.100 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:26:47.100 00:26:47.100 --- 10.0.0.1 ping statistics --- 00:26:47.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.100 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:47.100 10:41:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:50.391 Waiting for block devices as requested 00:26:50.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:50.391 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:50.391 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:50.391 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:50.391 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:50.391 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:50.651 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:50.651 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:50.651 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:50.910 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:50.910 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:50.910 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:51.169 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:51.169 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:51.169 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:51.427 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:51.427 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:51.427 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:51.427 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:51.427 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:51.427 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:51.427 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:51.428 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:51.428 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:51.428 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
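What happens here is identify_kernel_nvmf.sh building a kernel-space NVMe-oF target to identify against: the loop walks /sys/block/nvme*, skips zoned devices, and uses spdk-gpt.py/blkid to confirm the disk is not already partitioned before using it as the backing namespace, and the mkdir/echo/ln -s trace just below then exports it through the nvmet configfs tree. The redirect targets are hidden by xtrace, so the attribute names in this sketch are the standard nvmet configfs ones (an assumption on my part), while the NQN, backing device, and address mirror the values visible in the log:

    # Export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn over NVMe/TCP on 10.0.0.1:4420.
    modprobe nvmet
    modprobe nvmet-tcp                                   # loaded implicitly in this run
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$sub"                                         # configfs mkdir creates the subsystem
    mkdir "$sub/namespaces/1"
    mkdir "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # the Model Number seen in the identify output
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                     # publish the subsystem on the port

Once the link is in place, the nvme discover call in the log returns two records on 10.0.0.1:4420 (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), which is exactly what spdk_nvme_identify connects to next.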
00:26:51.428 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:51.687 No valid GPT data, bailing 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:51.687 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:51.687 00:26:51.687 Discovery Log Number of Records 2, Generation counter 2 00:26:51.687 =====Discovery Log Entry 0====== 00:26:51.687 trtype: tcp 00:26:51.687 adrfam: ipv4 00:26:51.687 subtype: current discovery subsystem 00:26:51.687 treq: not specified, sq flow control disable supported 00:26:51.687 portid: 1 00:26:51.687 trsvcid: 4420 00:26:51.687 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:51.687 traddr: 10.0.0.1 00:26:51.687 eflags: none 00:26:51.687 sectype: none 00:26:51.687 =====Discovery Log Entry 1====== 00:26:51.687 trtype: tcp 00:26:51.687 adrfam: ipv4 00:26:51.687 subtype: nvme subsystem 00:26:51.687 treq: not specified, sq flow control disable supported 00:26:51.687 portid: 1 00:26:51.687 trsvcid: 4420 00:26:51.687 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:51.687 traddr: 10.0.0.1 00:26:51.687 eflags: none 00:26:51.687 sectype: none 00:26:51.687 10:41:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:51.687 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:51.687 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.687 ===================================================== 00:26:51.687 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:51.687 ===================================================== 00:26:51.687 Controller Capabilities/Features 00:26:51.687 ================================ 00:26:51.687 Vendor ID: 0000 00:26:51.687 Subsystem Vendor ID: 0000 00:26:51.687 Serial Number: 5ab9fd97bd22a7036b4c 00:26:51.687 Model Number: Linux 00:26:51.687 Firmware Version: 6.7.0-68 00:26:51.687 Recommended Arb Burst: 0 00:26:51.687 IEEE OUI Identifier: 00 00 00 00:26:51.687 Multi-path I/O 00:26:51.687 May have multiple subsystem ports: No 00:26:51.687 May have multiple controllers: No 00:26:51.687 Associated with SR-IOV VF: No 00:26:51.687 Max Data Transfer Size: Unlimited 00:26:51.687 Max Number of Namespaces: 0 00:26:51.687 Max Number of I/O Queues: 1024 00:26:51.687 NVMe Specification Version (VS): 1.3 00:26:51.687 NVMe Specification Version (Identify): 1.3 00:26:51.687 Maximum Queue Entries: 1024 00:26:51.687 Contiguous Queues Required: No 00:26:51.687 Arbitration Mechanisms Supported 00:26:51.687 Weighted Round Robin: Not Supported 00:26:51.687 Vendor Specific: Not Supported 00:26:51.687 Reset Timeout: 7500 ms 00:26:51.687 Doorbell Stride: 4 bytes 00:26:51.687 NVM Subsystem Reset: Not Supported 00:26:51.687 Command Sets Supported 00:26:51.687 NVM Command Set: Supported 00:26:51.687 Boot Partition: Not Supported 00:26:51.687 Memory Page Size Minimum: 4096 bytes 00:26:51.687 Memory Page Size Maximum: 4096 bytes 00:26:51.687 Persistent Memory Region: Not Supported 00:26:51.687 Optional Asynchronous Events Supported 00:26:51.687 Namespace Attribute Notices: Not Supported 00:26:51.687 Firmware Activation Notices: Not Supported 00:26:51.687 ANA Change Notices: Not Supported 00:26:51.687 PLE Aggregate Log Change Notices: Not Supported 00:26:51.687 LBA Status Info Alert Notices: Not Supported 00:26:51.687 EGE Aggregate Log Change Notices: Not Supported 00:26:51.687 Normal NVM Subsystem Shutdown event: Not Supported 00:26:51.687 Zone Descriptor Change Notices: Not Supported 00:26:51.687 Discovery Log Change Notices: Supported 00:26:51.687 Controller Attributes 00:26:51.687 128-bit Host Identifier: Not Supported 00:26:51.687 Non-Operational Permissive Mode: Not Supported 00:26:51.687 NVM Sets: Not Supported 00:26:51.687 Read Recovery Levels: Not Supported 00:26:51.687 Endurance Groups: Not Supported 00:26:51.687 Predictable Latency Mode: Not Supported 00:26:51.687 Traffic Based Keep ALive: Not Supported 00:26:51.687 Namespace Granularity: Not Supported 00:26:51.687 SQ Associations: Not Supported 00:26:51.687 UUID List: Not Supported 00:26:51.687 Multi-Domain Subsystem: Not Supported 00:26:51.687 Fixed Capacity Management: Not Supported 00:26:51.687 Variable Capacity Management: Not Supported 00:26:51.687 Delete Endurance Group: Not Supported 00:26:51.687 Delete NVM Set: Not Supported 00:26:51.687 Extended LBA Formats Supported: Not Supported 00:26:51.687 Flexible Data Placement Supported: Not Supported 00:26:51.687 00:26:51.687 Controller Memory Buffer Support 00:26:51.687 ================================ 00:26:51.687 Supported: No 
00:26:51.687 00:26:51.688 Persistent Memory Region Support 00:26:51.688 ================================ 00:26:51.688 Supported: No 00:26:51.688 00:26:51.688 Admin Command Set Attributes 00:26:51.688 ============================ 00:26:51.688 Security Send/Receive: Not Supported 00:26:51.688 Format NVM: Not Supported 00:26:51.688 Firmware Activate/Download: Not Supported 00:26:51.688 Namespace Management: Not Supported 00:26:51.688 Device Self-Test: Not Supported 00:26:51.688 Directives: Not Supported 00:26:51.688 NVMe-MI: Not Supported 00:26:51.688 Virtualization Management: Not Supported 00:26:51.688 Doorbell Buffer Config: Not Supported 00:26:51.688 Get LBA Status Capability: Not Supported 00:26:51.688 Command & Feature Lockdown Capability: Not Supported 00:26:51.688 Abort Command Limit: 1 00:26:51.688 Async Event Request Limit: 1 00:26:51.688 Number of Firmware Slots: N/A 00:26:51.688 Firmware Slot 1 Read-Only: N/A 00:26:51.688 Firmware Activation Without Reset: N/A 00:26:51.688 Multiple Update Detection Support: N/A 00:26:51.688 Firmware Update Granularity: No Information Provided 00:26:51.688 Per-Namespace SMART Log: No 00:26:51.688 Asymmetric Namespace Access Log Page: Not Supported 00:26:51.688 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:51.688 Command Effects Log Page: Not Supported 00:26:51.688 Get Log Page Extended Data: Supported 00:26:51.688 Telemetry Log Pages: Not Supported 00:26:51.688 Persistent Event Log Pages: Not Supported 00:26:51.688 Supported Log Pages Log Page: May Support 00:26:51.688 Commands Supported & Effects Log Page: Not Supported 00:26:51.688 Feature Identifiers & Effects Log Page:May Support 00:26:51.688 NVMe-MI Commands & Effects Log Page: May Support 00:26:51.688 Data Area 4 for Telemetry Log: Not Supported 00:26:51.688 Error Log Page Entries Supported: 1 00:26:51.688 Keep Alive: Not Supported 00:26:51.688 00:26:51.688 NVM Command Set Attributes 00:26:51.688 ========================== 00:26:51.688 Submission Queue Entry Size 00:26:51.688 Max: 1 00:26:51.688 Min: 1 00:26:51.688 Completion Queue Entry Size 00:26:51.688 Max: 1 00:26:51.688 Min: 1 00:26:51.688 Number of Namespaces: 0 00:26:51.688 Compare Command: Not Supported 00:26:51.688 Write Uncorrectable Command: Not Supported 00:26:51.688 Dataset Management Command: Not Supported 00:26:51.688 Write Zeroes Command: Not Supported 00:26:51.688 Set Features Save Field: Not Supported 00:26:51.688 Reservations: Not Supported 00:26:51.688 Timestamp: Not Supported 00:26:51.688 Copy: Not Supported 00:26:51.688 Volatile Write Cache: Not Present 00:26:51.688 Atomic Write Unit (Normal): 1 00:26:51.688 Atomic Write Unit (PFail): 1 00:26:51.688 Atomic Compare & Write Unit: 1 00:26:51.688 Fused Compare & Write: Not Supported 00:26:51.688 Scatter-Gather List 00:26:51.688 SGL Command Set: Supported 00:26:51.688 SGL Keyed: Not Supported 00:26:51.688 SGL Bit Bucket Descriptor: Not Supported 00:26:51.688 SGL Metadata Pointer: Not Supported 00:26:51.688 Oversized SGL: Not Supported 00:26:51.688 SGL Metadata Address: Not Supported 00:26:51.688 SGL Offset: Supported 00:26:51.688 Transport SGL Data Block: Not Supported 00:26:51.688 Replay Protected Memory Block: Not Supported 00:26:51.688 00:26:51.688 Firmware Slot Information 00:26:51.688 ========================= 00:26:51.688 Active slot: 0 00:26:51.688 00:26:51.688 00:26:51.688 Error Log 00:26:51.688 ========= 00:26:51.688 00:26:51.688 Active Namespaces 00:26:51.688 ================= 00:26:51.688 Discovery Log Page 00:26:51.688 ================== 00:26:51.688 
Generation Counter: 2 00:26:51.688 Number of Records: 2 00:26:51.688 Record Format: 0 00:26:51.688 00:26:51.688 Discovery Log Entry 0 00:26:51.688 ---------------------- 00:26:51.688 Transport Type: 3 (TCP) 00:26:51.688 Address Family: 1 (IPv4) 00:26:51.688 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:51.688 Entry Flags: 00:26:51.688 Duplicate Returned Information: 0 00:26:51.688 Explicit Persistent Connection Support for Discovery: 0 00:26:51.688 Transport Requirements: 00:26:51.688 Secure Channel: Not Specified 00:26:51.688 Port ID: 1 (0x0001) 00:26:51.688 Controller ID: 65535 (0xffff) 00:26:51.688 Admin Max SQ Size: 32 00:26:51.688 Transport Service Identifier: 4420 00:26:51.688 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:51.688 Transport Address: 10.0.0.1 00:26:51.688 Discovery Log Entry 1 00:26:51.688 ---------------------- 00:26:51.688 Transport Type: 3 (TCP) 00:26:51.688 Address Family: 1 (IPv4) 00:26:51.688 Subsystem Type: 2 (NVM Subsystem) 00:26:51.688 Entry Flags: 00:26:51.688 Duplicate Returned Information: 0 00:26:51.688 Explicit Persistent Connection Support for Discovery: 0 00:26:51.688 Transport Requirements: 00:26:51.688 Secure Channel: Not Specified 00:26:51.688 Port ID: 1 (0x0001) 00:26:51.688 Controller ID: 65535 (0xffff) 00:26:51.688 Admin Max SQ Size: 32 00:26:51.688 Transport Service Identifier: 4420 00:26:51.688 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:51.688 Transport Address: 10.0.0.1 00:26:51.688 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:51.949 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.949 get_feature(0x01) failed 00:26:51.949 get_feature(0x02) failed 00:26:51.949 get_feature(0x04) failed 00:26:51.949 ===================================================== 00:26:51.949 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:51.949 ===================================================== 00:26:51.949 Controller Capabilities/Features 00:26:51.949 ================================ 00:26:51.949 Vendor ID: 0000 00:26:51.949 Subsystem Vendor ID: 0000 00:26:51.949 Serial Number: a40b211f87c450093ff6 00:26:51.949 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:51.949 Firmware Version: 6.7.0-68 00:26:51.949 Recommended Arb Burst: 6 00:26:51.949 IEEE OUI Identifier: 00 00 00 00:26:51.949 Multi-path I/O 00:26:51.949 May have multiple subsystem ports: Yes 00:26:51.949 May have multiple controllers: Yes 00:26:51.949 Associated with SR-IOV VF: No 00:26:51.949 Max Data Transfer Size: Unlimited 00:26:51.949 Max Number of Namespaces: 1024 00:26:51.949 Max Number of I/O Queues: 128 00:26:51.949 NVMe Specification Version (VS): 1.3 00:26:51.949 NVMe Specification Version (Identify): 1.3 00:26:51.949 Maximum Queue Entries: 1024 00:26:51.949 Contiguous Queues Required: No 00:26:51.949 Arbitration Mechanisms Supported 00:26:51.949 Weighted Round Robin: Not Supported 00:26:51.949 Vendor Specific: Not Supported 00:26:51.949 Reset Timeout: 7500 ms 00:26:51.949 Doorbell Stride: 4 bytes 00:26:51.949 NVM Subsystem Reset: Not Supported 00:26:51.949 Command Sets Supported 00:26:51.949 NVM Command Set: Supported 00:26:51.949 Boot Partition: Not Supported 00:26:51.949 Memory Page Size Minimum: 4096 bytes 00:26:51.949 Memory Page Size Maximum: 4096 bytes 00:26:51.949 
Persistent Memory Region: Not Supported 00:26:51.949 Optional Asynchronous Events Supported 00:26:51.949 Namespace Attribute Notices: Supported 00:26:51.949 Firmware Activation Notices: Not Supported 00:26:51.949 ANA Change Notices: Supported 00:26:51.949 PLE Aggregate Log Change Notices: Not Supported 00:26:51.949 LBA Status Info Alert Notices: Not Supported 00:26:51.949 EGE Aggregate Log Change Notices: Not Supported 00:26:51.949 Normal NVM Subsystem Shutdown event: Not Supported 00:26:51.949 Zone Descriptor Change Notices: Not Supported 00:26:51.949 Discovery Log Change Notices: Not Supported 00:26:51.949 Controller Attributes 00:26:51.949 128-bit Host Identifier: Supported 00:26:51.949 Non-Operational Permissive Mode: Not Supported 00:26:51.949 NVM Sets: Not Supported 00:26:51.949 Read Recovery Levels: Not Supported 00:26:51.949 Endurance Groups: Not Supported 00:26:51.949 Predictable Latency Mode: Not Supported 00:26:51.949 Traffic Based Keep ALive: Supported 00:26:51.949 Namespace Granularity: Not Supported 00:26:51.949 SQ Associations: Not Supported 00:26:51.949 UUID List: Not Supported 00:26:51.949 Multi-Domain Subsystem: Not Supported 00:26:51.949 Fixed Capacity Management: Not Supported 00:26:51.949 Variable Capacity Management: Not Supported 00:26:51.949 Delete Endurance Group: Not Supported 00:26:51.949 Delete NVM Set: Not Supported 00:26:51.949 Extended LBA Formats Supported: Not Supported 00:26:51.949 Flexible Data Placement Supported: Not Supported 00:26:51.949 00:26:51.949 Controller Memory Buffer Support 00:26:51.949 ================================ 00:26:51.949 Supported: No 00:26:51.949 00:26:51.949 Persistent Memory Region Support 00:26:51.949 ================================ 00:26:51.949 Supported: No 00:26:51.949 00:26:51.949 Admin Command Set Attributes 00:26:51.949 ============================ 00:26:51.949 Security Send/Receive: Not Supported 00:26:51.949 Format NVM: Not Supported 00:26:51.949 Firmware Activate/Download: Not Supported 00:26:51.949 Namespace Management: Not Supported 00:26:51.949 Device Self-Test: Not Supported 00:26:51.949 Directives: Not Supported 00:26:51.949 NVMe-MI: Not Supported 00:26:51.949 Virtualization Management: Not Supported 00:26:51.949 Doorbell Buffer Config: Not Supported 00:26:51.949 Get LBA Status Capability: Not Supported 00:26:51.949 Command & Feature Lockdown Capability: Not Supported 00:26:51.949 Abort Command Limit: 4 00:26:51.949 Async Event Request Limit: 4 00:26:51.949 Number of Firmware Slots: N/A 00:26:51.949 Firmware Slot 1 Read-Only: N/A 00:26:51.949 Firmware Activation Without Reset: N/A 00:26:51.949 Multiple Update Detection Support: N/A 00:26:51.949 Firmware Update Granularity: No Information Provided 00:26:51.949 Per-Namespace SMART Log: Yes 00:26:51.949 Asymmetric Namespace Access Log Page: Supported 00:26:51.949 ANA Transition Time : 10 sec 00:26:51.949 00:26:51.949 Asymmetric Namespace Access Capabilities 00:26:51.949 ANA Optimized State : Supported 00:26:51.949 ANA Non-Optimized State : Supported 00:26:51.949 ANA Inaccessible State : Supported 00:26:51.949 ANA Persistent Loss State : Supported 00:26:51.949 ANA Change State : Supported 00:26:51.949 ANAGRPID is not changed : No 00:26:51.949 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:51.949 00:26:51.949 ANA Group Identifier Maximum : 128 00:26:51.949 Number of ANA Group Identifiers : 128 00:26:51.949 Max Number of Allowed Namespaces : 1024 00:26:51.949 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:51.949 Command Effects Log Page: Supported 
00:26:51.949 Get Log Page Extended Data: Supported 00:26:51.949 Telemetry Log Pages: Not Supported 00:26:51.949 Persistent Event Log Pages: Not Supported 00:26:51.949 Supported Log Pages Log Page: May Support 00:26:51.949 Commands Supported & Effects Log Page: Not Supported 00:26:51.949 Feature Identifiers & Effects Log Page:May Support 00:26:51.949 NVMe-MI Commands & Effects Log Page: May Support 00:26:51.949 Data Area 4 for Telemetry Log: Not Supported 00:26:51.949 Error Log Page Entries Supported: 128 00:26:51.949 Keep Alive: Supported 00:26:51.949 Keep Alive Granularity: 1000 ms 00:26:51.949 00:26:51.949 NVM Command Set Attributes 00:26:51.949 ========================== 00:26:51.949 Submission Queue Entry Size 00:26:51.949 Max: 64 00:26:51.949 Min: 64 00:26:51.949 Completion Queue Entry Size 00:26:51.949 Max: 16 00:26:51.949 Min: 16 00:26:51.949 Number of Namespaces: 1024 00:26:51.949 Compare Command: Not Supported 00:26:51.949 Write Uncorrectable Command: Not Supported 00:26:51.949 Dataset Management Command: Supported 00:26:51.949 Write Zeroes Command: Supported 00:26:51.949 Set Features Save Field: Not Supported 00:26:51.949 Reservations: Not Supported 00:26:51.949 Timestamp: Not Supported 00:26:51.949 Copy: Not Supported 00:26:51.949 Volatile Write Cache: Present 00:26:51.949 Atomic Write Unit (Normal): 1 00:26:51.949 Atomic Write Unit (PFail): 1 00:26:51.949 Atomic Compare & Write Unit: 1 00:26:51.949 Fused Compare & Write: Not Supported 00:26:51.949 Scatter-Gather List 00:26:51.949 SGL Command Set: Supported 00:26:51.949 SGL Keyed: Not Supported 00:26:51.949 SGL Bit Bucket Descriptor: Not Supported 00:26:51.949 SGL Metadata Pointer: Not Supported 00:26:51.949 Oversized SGL: Not Supported 00:26:51.949 SGL Metadata Address: Not Supported 00:26:51.950 SGL Offset: Supported 00:26:51.950 Transport SGL Data Block: Not Supported 00:26:51.950 Replay Protected Memory Block: Not Supported 00:26:51.950 00:26:51.950 Firmware Slot Information 00:26:51.950 ========================= 00:26:51.950 Active slot: 0 00:26:51.950 00:26:51.950 Asymmetric Namespace Access 00:26:51.950 =========================== 00:26:51.950 Change Count : 0 00:26:51.950 Number of ANA Group Descriptors : 1 00:26:51.950 ANA Group Descriptor : 0 00:26:51.950 ANA Group ID : 1 00:26:51.950 Number of NSID Values : 1 00:26:51.950 Change Count : 0 00:26:51.950 ANA State : 1 00:26:51.950 Namespace Identifier : 1 00:26:51.950 00:26:51.950 Commands Supported and Effects 00:26:51.950 ============================== 00:26:51.950 Admin Commands 00:26:51.950 -------------- 00:26:51.950 Get Log Page (02h): Supported 00:26:51.950 Identify (06h): Supported 00:26:51.950 Abort (08h): Supported 00:26:51.950 Set Features (09h): Supported 00:26:51.950 Get Features (0Ah): Supported 00:26:51.950 Asynchronous Event Request (0Ch): Supported 00:26:51.950 Keep Alive (18h): Supported 00:26:51.950 I/O Commands 00:26:51.950 ------------ 00:26:51.950 Flush (00h): Supported 00:26:51.950 Write (01h): Supported LBA-Change 00:26:51.950 Read (02h): Supported 00:26:51.950 Write Zeroes (08h): Supported LBA-Change 00:26:51.950 Dataset Management (09h): Supported 00:26:51.950 00:26:51.950 Error Log 00:26:51.950 ========= 00:26:51.950 Entry: 0 00:26:51.950 Error Count: 0x3 00:26:51.950 Submission Queue Id: 0x0 00:26:51.950 Command Id: 0x5 00:26:51.950 Phase Bit: 0 00:26:51.950 Status Code: 0x2 00:26:51.950 Status Code Type: 0x0 00:26:51.950 Do Not Retry: 1 00:26:51.950 Error Location: 0x28 00:26:51.950 LBA: 0x0 00:26:51.950 Namespace: 0x0 00:26:51.950 Vendor Log 
Page: 0x0 00:26:51.950 ----------- 00:26:51.950 Entry: 1 00:26:51.950 Error Count: 0x2 00:26:51.950 Submission Queue Id: 0x0 00:26:51.950 Command Id: 0x5 00:26:51.950 Phase Bit: 0 00:26:51.950 Status Code: 0x2 00:26:51.950 Status Code Type: 0x0 00:26:51.950 Do Not Retry: 1 00:26:51.950 Error Location: 0x28 00:26:51.950 LBA: 0x0 00:26:51.950 Namespace: 0x0 00:26:51.950 Vendor Log Page: 0x0 00:26:51.950 ----------- 00:26:51.950 Entry: 2 00:26:51.950 Error Count: 0x1 00:26:51.950 Submission Queue Id: 0x0 00:26:51.950 Command Id: 0x4 00:26:51.950 Phase Bit: 0 00:26:51.950 Status Code: 0x2 00:26:51.950 Status Code Type: 0x0 00:26:51.950 Do Not Retry: 1 00:26:51.950 Error Location: 0x28 00:26:51.950 LBA: 0x0 00:26:51.950 Namespace: 0x0 00:26:51.950 Vendor Log Page: 0x0 00:26:51.950 00:26:51.950 Number of Queues 00:26:51.950 ================ 00:26:51.950 Number of I/O Submission Queues: 128 00:26:51.950 Number of I/O Completion Queues: 128 00:26:51.950 00:26:51.950 ZNS Specific Controller Data 00:26:51.950 ============================ 00:26:51.950 Zone Append Size Limit: 0 00:26:51.950 00:26:51.950 00:26:51.950 Active Namespaces 00:26:51.950 ================= 00:26:51.950 get_feature(0x05) failed 00:26:51.950 Namespace ID:1 00:26:51.950 Command Set Identifier: NVM (00h) 00:26:51.950 Deallocate: Supported 00:26:51.950 Deallocated/Unwritten Error: Not Supported 00:26:51.950 Deallocated Read Value: Unknown 00:26:51.950 Deallocate in Write Zeroes: Not Supported 00:26:51.950 Deallocated Guard Field: 0xFFFF 00:26:51.950 Flush: Supported 00:26:51.950 Reservation: Not Supported 00:26:51.950 Namespace Sharing Capabilities: Multiple Controllers 00:26:51.950 Size (in LBAs): 3125627568 (1490GiB) 00:26:51.950 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:51.950 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:51.950 UUID: 16ff26a3-e556-4fc1-bac4-6e509d497553 00:26:51.950 Thin Provisioning: Not Supported 00:26:51.950 Per-NS Atomic Units: Yes 00:26:51.950 Atomic Boundary Size (Normal): 0 00:26:51.950 Atomic Boundary Size (PFail): 0 00:26:51.950 Atomic Boundary Offset: 0 00:26:51.950 NGUID/EUI64 Never Reused: No 00:26:51.950 ANA group ID: 1 00:26:51.950 Namespace Write Protected: No 00:26:51.950 Number of LBA Formats: 1 00:26:51.950 Current LBA Format: LBA Format #00 00:26:51.950 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:51.950 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.950 rmmod nvme_tcp 00:26:51.950 rmmod nvme_fabrics 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:51.950 
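The EXIT trap set earlier ('nvmftestfini || :; clean_kernel_target') is now running, and the cleanup visible below unwinds the export in the reverse order of the setup: configfs refuses to rmdir a directory that is still referenced, so the namespace is disabled and the port-to-subsystem link removed before the directories themselves go away. A sketch using the same path variables as the export above (the command sequence matches the trace that follows; the enable path is inferred, as the redirect target is again hidden by xtrace):

    echo 0 > "$sub/namespaces/1/enable"                          # take the namespace offline first
    rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"        # unlink the subsystem from the port
    rmdir  "$sub/namespaces/1"
    rmdir  "$port"
    rmdir  "$sub"
    modprobe -r nvmet_tcp nvmet                                  # unload the target-side modules
    # setup.sh then rebinds the ioatdma/nvme devices back to vfio-pci for the next test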
10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.950 10:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.857 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:54.116 10:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:57.406 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:57.406 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:26:57.406 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:59.311 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:59.311 00:26:59.311 real 0m18.876s 00:26:59.311 user 0m4.413s 00:26:59.311 sys 0m10.078s 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:59.311 ************************************ 00:26:59.311 END TEST nvmf_identify_kernel_target 00:26:59.311 ************************************ 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.311 ************************************ 00:26:59.311 START TEST nvmf_auth_host 00:26:59.311 ************************************ 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:59.311 * Looking for test storage... 00:26:59.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.311 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.312 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:59.312 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.312 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.880 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.880 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.880 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.880 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.880 
10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.880 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.880 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.880 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:05.881 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:05.881 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:05.881 Found net devices under 0000:af:00.0: cvl_0_0 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:05.881 Found net devices under 0000:af:00.1: cvl_0_1 00:27:05.881 10:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:27:05.881 00:27:05.881 --- 10.0.0.2 ping statistics --- 00:27:05.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.881 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:27:05.881 00:27:05.881 --- 10.0.0.1 ping statistics --- 00:27:05.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.881 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=4025652 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 4025652 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 4025652 ']' 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
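Stripped of the xtrace noise, the network setup that nvmftestinit performs above reduces to a handful of iproute2/iptables calls: the target-facing port of the NIC is moved into its own namespace, both sides are addressed and pinged, and nvmf_tgt is then started inside that namespace. A condensed sketch using the interface names and addresses from this run (the error handling in nvmf/common.sh is omitted):
  # put the target-facing port into its own namespace; the initiator stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  # the target application then runs inside the namespace, as in the trace above:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &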
00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.881 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c606da90d2a389c3ac240c0e87e83e07 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8hS 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c606da90d2a389c3ac240c0e87e83e07 0 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c606da90d2a389c3ac240c0e87e83e07 0 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c606da90d2a389c3ac240c0e87e83e07 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8hS 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8hS 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8hS 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:06.871 10:42:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0178ab71a03e86ab227a8fed9cdebf6538b25357a4eb880220bebdc39a97df8f 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VCj 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0178ab71a03e86ab227a8fed9cdebf6538b25357a4eb880220bebdc39a97df8f 3 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0178ab71a03e86ab227a8fed9cdebf6538b25357a4eb880220bebdc39a97df8f 3 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0178ab71a03e86ab227a8fed9cdebf6538b25357a4eb880220bebdc39a97df8f 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VCj 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VCj 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.VCj 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8186e6deb572218951347425a2196d9f7e4b98853fe5e5e2 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bbu 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8186e6deb572218951347425a2196d9f7e4b98853fe5e5e2 0 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8186e6deb572218951347425a2196d9f7e4b98853fe5e5e2 0 
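The gen_dhchap_key calls above build each secret by pulling random bytes with xxd and wrapping them into the DHHC-1:<hmac>:<base64>: form through an inline python helper. Roughly equivalent secrets can be produced outside this harness with nvme-cli's gen-dhchap-key; the mapping below between the script's (digest, hex-length) arguments and nvme-cli's flags is an assumption inferred from the xxd byte counts in the trace, not the script's own code path:
  # --hmac: 0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512; --key-length is in bytes
  nvme gen-dhchap-key --hmac=0 --key-length=16    # ~ gen_dhchap_key null 32   (xxd -l 16)
  nvme gen-dhchap-key --hmac=3 --key-length=32    # ~ gen_dhchap_key sha512 64 (xxd -l 32)
  nvme gen-dhchap-key --hmac=2 --key-length=24    # ~ gen_dhchap_key sha384 48 (xxd -l 24)
  # each command prints a secret of the form DHHC-1:0X:<base64 payload>:
The /tmp/spdk.key-* files written here are registered with the target further down in the trace via rpc_cmd keyring_file_add_key.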
00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:06.871 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8186e6deb572218951347425a2196d9f7e4b98853fe5e5e2 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bbu 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bbu 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bbu 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f19ec1717c48b28860f39885f8837fb71dba4ddcdecabee 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IAV 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f19ec1717c48b28860f39885f8837fb71dba4ddcdecabee 2 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f19ec1717c48b28860f39885f8837fb71dba4ddcdecabee 2 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f19ec1717c48b28860f39885f8837fb71dba4ddcdecabee 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IAV 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IAV 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IAV 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.872 10:42:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b07e7928800551a572d722877fdcfa02 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YYR 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b07e7928800551a572d722877fdcfa02 1 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b07e7928800551a572d722877fdcfa02 1 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b07e7928800551a572d722877fdcfa02 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:06.872 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YYR 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YYR 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.YYR 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2c590ac72d13d498d11c97f1210d9c43 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tQW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2c590ac72d13d498d11c97f1210d9c43 1 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2c590ac72d13d498d11c97f1210d9c43 1 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=2c590ac72d13d498d11c97f1210d9c43 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tQW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tQW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tQW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0a5da53c0f4c66c3c096c9fa341f3dcc1642aa468c58f0de 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.x9Z 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0a5da53c0f4c66c3c096c9fa341f3dcc1642aa468c58f0de 2 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0a5da53c0f4c66c3c096c9fa341f3dcc1642aa468c58f0de 2 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0a5da53c0f4c66c3c096c9fa341f3dcc1642aa468c58f0de 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.x9Z 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.x9Z 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.x9Z 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:07.175 10:42:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=84e379269afda7012391eb4628381df9 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hkR 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 84e379269afda7012391eb4628381df9 0 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 84e379269afda7012391eb4628381df9 0 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=84e379269afda7012391eb4628381df9 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hkR 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hkR 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hkR 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb494e711edd18b7808fddbcbee7f519b55a199cdf33f10b0fb875339f6bc2ad 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DsW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb494e711edd18b7808fddbcbee7f519b55a199cdf33f10b0fb875339f6bc2ad 3 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb494e711edd18b7808fddbcbee7f519b55a199cdf33f10b0fb875339f6bc2ad 3 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb494e711edd18b7808fddbcbee7f519b55a199cdf33f10b0fb875339f6bc2ad 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DsW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DsW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DsW 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:07.175 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4025652 00:27:07.176 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 4025652 ']' 00:27:07.176 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.176 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.176 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.176 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.176 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8hS 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.VCj ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VCj 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bbu 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.IAV ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.IAV 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.YYR 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tQW ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tQW 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.x9Z 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hkR ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hkR 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DsW 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.436 10:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.436 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:07.437 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:10.723 Waiting for block devices as requested
00:27:10.723 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:10.723 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:10.981 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:10.981 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:10.981 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:11.240 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:11.240 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:11.240 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:11.500 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:11.500 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:11.500 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:11.759 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:11.759 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:11.759 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:12.016 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:12.016 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:12.016 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:12.954 No valid GPT data, bailing
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
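
setup.sh reset above hands the PCI devices back to their kernel drivers so that /dev/nvme0n1 can back the kernel target, and the three mkdir calls create the configfs nodes that the echo commands on the following lines populate. A condensed sketch of this configure_kernel_target sequence follows; the commands and values are the ones visible in the trace, while the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet ones and are assumed here rather than shown by the trace itself:

  # Sketch: kernel NVMe-oF/TCP target bring-up over configfs, mirroring the trace.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # attribute name assumed
  echo 1 > "$subsys/attr_allow_any_host"                        # auth.sh tightens this to 0 once allowed_hosts is populated
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

The nvme discover call on the next lines is the sanity check that this port now exposes both the discovery subsystem and nqn.2024-02.io.spdk:cnode0.
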
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420
00:27:12.954
00:27:12.954 Discovery Log Number of Records 2, Generation counter 2
00:27:12.954 =====Discovery Log Entry 0======
00:27:12.954 trtype: tcp
00:27:12.954 adrfam: ipv4
00:27:12.954 subtype: current discovery subsystem
00:27:12.954 treq: not specified, sq flow control disable supported
00:27:12.954 portid: 1
00:27:12.954 trsvcid: 4420
00:27:12.954 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:12.954 traddr: 10.0.0.1
00:27:12.954 eflags: none
00:27:12.954 sectype: none
00:27:12.954 =====Discovery Log Entry 1======
00:27:12.954 trtype: tcp
00:27:12.954 adrfam: ipv4
00:27:12.954 subtype: nvme subsystem
00:27:12.954 treq: not specified, sq flow control disable supported
00:27:12.954 portid: 1
00:27:12.954 trsvcid: 4420
00:27:12.954 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:12.954 traddr: 10.0.0.1
00:27:12.954 eflags: none
00:27:12.954 sectype: none
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==:
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==:
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:12.954 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.955 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.213 nvme0n1 00:27:13.213 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
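
From this point the test repeats one pattern for every digest, DH group and key index: program the secret into the kernel target's host entry (nvmet_auth_set_key), restrict the SPDK initiator to that single digest/dhgroup via bdev_nvme_set_options, attach with the matching keyring names, confirm the controller came up, and detach. A condensed sketch of one such round, using the RPCs exactly as they appear in the trace; the kernel-side file names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the standard nvmet host configfs attributes and are assumed here, while the DHHC-1 secrets are the keyid-0 values echoed in the trace above:

  # Sketch: one authenticated connect round (digest=sha256, dhgroup=ffdhe2048, keyid=0).
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  key='DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH:'
  ckey='DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=:'
  echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"       # attribute names assumed, values from the trace
  echo ffdhe2048 > "$host_cfg/dhchap_dhgroup"
  echo "$key" > "$host_cfg/dhchap_key"
  echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The first pass (connect_authenticate with sha256,sha384,sha512 and ffdhe2048 through ffdhe8192, just above) offered every digest and DH group at once; the rounds that follow narrow bdev_nvme_set_options to a single digest/dhgroup per iteration, which is why the same set_options/attach/get_controllers/detach sequence recurs for each keyid in the remainder of the log.
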
00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.214 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.472 nvme0n1 00:27:13.472 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.472 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.472 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.472 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.473 10:42:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.473 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.731 nvme0n1 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.731 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.989 nvme0n1 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.989 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.990 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 nvme0n1 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 nvme0n1 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.248 10:42:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.507 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.507 nvme0n1 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.507 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.766 
10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.766 nvme0n1 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.766 10:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.766 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.767 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.026 nvme0n1 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.026 10:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.026 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.285 nvme0n1 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.285 10:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.285 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.544 nvme0n1 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.544 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.803 nvme0n1 00:27:15.803 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.803 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.803 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.803 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.803 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.803 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:16.062 10:42:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.062 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.321 nvme0n1 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.321 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.580 nvme0n1 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.580 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.581 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.581 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.581 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.839 nvme0n1 00:27:16.839 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.839 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.839 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.839 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.839 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.839 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.839 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.840 10:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.840 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.097 nvme0n1 00:27:17.097 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.097 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.097 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.097 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.097 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.097 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.356 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.357 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.357 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.615 nvme0n1 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 
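[editor note] The trace above and below repeats the same pattern for every digest/dhgroup/keyid combination: install the key on the target, configure the host's DH-HMAC-CHAP options, attach, verify, detach. As a reading aid, the following condensed sketch shows a single iteration (sha256 / ffdhe6144 / keyid 1). It uses only helper names and RPCs that appear verbatim in this trace (nvmet_auth_set_key, rpc_cmd, bdev_nvme_*); it is an illustrative sketch of the flow, not the literal host/auth.sh code, and it assumes the SPDK test environment that defines those helpers and the key0..key4 / ckey0..ckey4 names referenced earlier in the log.

# Condensed view of one iteration of the loop traced in this log (assumed: SPDK test helpers are sourced).
digest=sha256; dhgroup=ffdhe6144; keyid=1

# Target side: the host/auth.sh helper that installs hmac(sha256), the DH group, and the DHHC-1 key/ctrlr-key.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side, via SPDK JSON-RPC (exact commands as they appear in the trace):
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"   # keyN/ckeyN were registered earlier in the test (not shown here)

# Authentication succeeded if the controller attached; then clean up for the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
[end editor note]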
00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.615 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.181 nvme0n1 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.181 10:42:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.181 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.440 nvme0n1 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.440 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.699 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.958 nvme0n1 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.958 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.526 nvme0n1 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.526 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.526 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.527 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.096 nvme0n1 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.096 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.685 nvme0n1 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:20.685 
10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.685 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.275 nvme0n1 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.275 
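Each round then verifies that authentication actually produced a controller and tears it down before the next keyid, which is what the repeated get_controllers/jq/detach_controller triplets above are doing. Under the same assumptions as the earlier sketch (scripts/rpc.py reaching the test's SPDK app), the equivalent manual check would be:

    # Confirm the expected controller authenticated and came up under the expected name...
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || echo "unexpected controller list: $name" >&2
    # ...then detach so the next digest/dhgroup/key combination starts clean.
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Detaching between rounds keeps each handshake attributable to a single digests/dhgroups setting, since bdev_nvme_set_options applies module-wide rather than per controller.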
10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.275 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.842 nvme0n1 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.842 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.843 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.843 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.101 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.670 nvme0n1 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
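The switch from sha256/ffdhe8192 to sha384/ffdhe2048 at this point is just the outer loops advancing; the host/auth.sh@100, @101 and @102 markers in the trace show the test sweeping every configured digest against every FFDHE group and every key index. A control-flow sketch reconstructed from those markers (the digests, dhgroups and keys arrays, and both helper functions, are defined earlier in host/auth.sh and are not visible in this excerpt):

    # Loop structure implied by the host/auth.sh@100-104 trace markers.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Target side: install this round's key/ckey for the given digest/dhgroup.
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # Host side: set options, attach with keyN/ckeyN, verify, detach.
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

This sketches the iteration order only; rounds without a controller key (the empty ckey seen for keyid=4) are handled inside the helpers by the conditional ckey expansion visible at host/auth.sh@58.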
DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.670 nvme0n1 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.670 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.930 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.931 nvme0n1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:22.931 10:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.931 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 nvme0n1 00:27:23.190 10:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.190 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.449 nvme0n1 00:27:23.449 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.449 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.449 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.449 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.449 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.449 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.707 nvme0n1 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.707 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.708 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.966 nvme0n1 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.966 
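
# The check-and-teardown pattern repeated after every attach above
# (host/auth.sh@64-65), sketched with the same assumed scripts/rpc.py client:
# list controllers, confirm the authenticated controller came up as nvme0,
# then detach so the next digest/DH-group/keyid combination starts clean.
rpc_py=./scripts/rpc.py                                   # assumed path
name=$($rpc_py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]                                  # authentication succeeded
$rpc_py bdev_nvme_detach_controller nvme0
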
10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.966 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.967 10:42:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.967 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.225 nvme0n1 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.225 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
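
# The echoes at host/auth.sh@48-51 above set up the target side of the
# handshake. xtrace does not show where their output is redirected; this sketch
# assumes the Linux nvmet configfs host attributes are the destination, which
# is an assumption about the helper, not something visible in the trace.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"       # @48: digest
echo ffdhe3072      > "$host_dir/dhchap_dhgroup"    # @49: DH group
echo "$key"         > "$host_dir/dhchap_key"        # @50: host secret (DHHC-1:...)
[[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # @51: optional controller secret
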
DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.226 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.485 nvme0n1 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.485 10:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.485 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.744 nvme0n1 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.744 
10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.744 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.745 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.745 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.004 nvme0n1 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.004 
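
# The secrets echoed throughout this section use the DHHC-1:<t>:<base64>:
# container format. A quick inspection sketch using the key-id 0 secret from
# the trace; reading the decoded blob as raw key material plus a 4-byte check
# value (as nvme-cli's gen-dhchap-key produces) is an assumption, not something
# the test itself asserts.
secret='DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH:'
b64=${secret#DHHC-1:*:}     # strip the "DHHC-1:<t>:" prefix
b64=${b64%:}                # and the trailing colon
echo -n "$b64" | base64 -d | wc -c    # 36 bytes here: 32-byte key + 4-byte check value
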
10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.004 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.263 nvme0n1 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.263 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.264 10:42:28 
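
# The nvmf/common.sh@741-755 entries above are the get_main_ns_ip helper
# resolving which address the initiator should dial for the current transport.
# A sketch of that selection logic; the transport variable's name is an
# assumption (the trace only shows its expanded value, tcp), while the exported
# IP variable names are the ones logged.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # variable *name*, e.g. NVMF_INITIATOR_IP
    ip=${!ip}                               # indirect expansion to the address itself
    [[ -z "$ip" ]] && return 1
    echo "$ip"                              # 10.0.0.1 in this run
}
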
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.264 10:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.523 nvme0n1 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.523 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.782 nvme0n1 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.782 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.041 nvme0n1 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.041 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.301 10:42:29 
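
# The ckey=(...) assignment at host/auth.sh@58 is the bash idiom that makes the
# controller key optional: ${ckeys[keyid]:+...} expands to the extra arguments
# only when a ckey exists for that key id, so keyid 4 (empty ckey above) adds
# nothing to the attach command. Stand-alone illustration with made-up values:
ckeys=([0]="DHHC-1:03:placeholder=:" [4]="")   # hypothetical secrets, keyid 4 intentionally empty
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
# keyid=0 extra args: --dhchap-ctrlr-key ckey0
# keyid=4 extra args: <none>
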
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.560 nvme0n1 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.560 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.561 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.819 nvme0n1 00:27:26.819 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.819 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.819 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.819 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.819 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.819 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.078 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.079 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.079 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.079 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.079 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.079 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.338 nvme0n1 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.338 10:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.338 10:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.338 10:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.338 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.905 nvme0n1 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.905 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.905 
10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.163 nvme0n1 00:27:28.163 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.163 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.163 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.163 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.163 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.163 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.422 10:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.680 nvme0n1 00:27:28.680 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.681 10:42:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.681 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.248 nvme0n1 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:29.248 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.249 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.249 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.249 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.249 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.249 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:29.249 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.249 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.507 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.507 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.507 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.508 10:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.074 nvme0n1 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.074 
10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.074 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.075 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.075 10:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.641 nvme0n1 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.641 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.642 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.311 nvme0n1 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.311 10:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.311 10:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.311 10:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.880 nvme0n1 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.880 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.881 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.881 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.881 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:32.140 nvme0n1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.140 nvme0n1 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.140 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.399 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:32.400 
10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.400 10:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.400 nvme0n1 00:27:32.400 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.400 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.400 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.400 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.400 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.400 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.659 
10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.659 nvme0n1 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.659 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.660 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.919 nvme0n1 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.919 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.920 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.920 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.920 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.920 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.920 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.920 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.209 nvme0n1 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.209 
10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.209 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.210 10:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.210 10:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.469 nvme0n1 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:33.469 10:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:33.469 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.470 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.729 nvme0n1 00:27:33.729 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.729 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.729 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.729 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.729 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.730 10:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.730 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.991 nvme0n1 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.991 
10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.991 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
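Each pass of the trace is one DH-HMAC-CHAP round trip: host/auth.sh programs the target side with nvmet_auth_set_key (digest, DH group, and the DHHC-1 secret for the current keyid), restricts the host's bdev_nvme layer to that same digest and DH group, attaches, checks that the controller shows up, and detaches before the next combination. Condensed into standalone form, one iteration looks roughly like the sketch below; this is reconstructed from the xtrace output rather than copied from host/auth.sh, and rpc_cmd is assumed to be the test framework's wrapper around scripts/rpc.py.

  # One sha512/ffdhe3072 iteration with keyid=2, reconstructed from the trace.
  digest=sha512 dhgroup=ffdhe3072 keyid=2

  # Host side: only advertise the digest/DH group under test, then attach with
  # the per-keyid host key and (when one exists) the matching controller key.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # The attach only succeeds if authentication passed, so a simple name check
  # is enough before tearing down for the next combination.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

keyid 4 is the one case without a controller key, which is why its attach in the trace omits --dhchap-ctrlr-key (the script builds that argument with the ${ckeys[keyid]:+...} expansion visible above); the differing DHHC-1:00/01/02/03 prefixes on the five secrets record which hash, if any, was used to transform the raw secret when the keys were generated.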
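The get_main_ns_ip expansion that precedes every attach is also worth unpacking, since xtrace only shows its expanded values. It keeps an associative array mapping transport name to the name of an environment variable, picks the entry for the active transport, and resolves it with indirect expansion, which is how the literal 10.0.0.1 ends up in the trace for TCP. A rough reconstruction follows; the transport variable name ($TEST_TRANSPORT) is an assumption, and the authoritative definition lives in nvmf/common.sh.

  # Reconstruction of the IP selection seen in the trace; $TEST_TRANSPORT is
  # assumed, as the trace only shows its expanded value ("tcp").
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP  # RDMA runs target the first target IP
          ["tcp"]=NVMF_INITIATOR_IP      # TCP runs use the initiator-side IP
      )
      [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -n ${!ip} ]] || return 1      # indirect expansion: value of the named variable
      echo "${!ip}"
  }

In this run NVMF_INITIATOR_IP resolves to 10.0.0.1, so every attach in this section connects to the same listener on port 4420.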
00:27:34.252 nvme0n1 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.252 10:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.252 10:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.513 nvme0n1 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.513 10:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.513 10:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.513 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.773 nvme0n1 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.773 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.034 nvme0n1 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.034 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.294 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.295 10:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.555 nvme0n1 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.555 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.556 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.815 nvme0n1 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.815 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.815 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.384 nvme0n1 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.384 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.385 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.385 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 nvme0n1 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.647 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.217 nvme0n1 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:37.217 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.218 10:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.478 nvme0n1 00:27:37.478 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.478 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.478 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.478 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.478 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.478 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.739 10:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.739 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.998 nvme0n1 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzYwNmRhOTBkMmEzODljM2FjMjQwYzBlODdlODNlMDfIXjbH: 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDE3OGFiNzFhMDNlODZhYjIyN2E4ZmVkOWNkZWJmNjUzOGIyNTM1N2E0ZWI4ODAyMjBiZWJkYzM5YTk3ZGY4ZpAeVPo=: 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.999 10:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.566 nvme0n1 00:27:38.566 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.566 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.566 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.566 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.566 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.566 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.825 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.826 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.393 nvme0n1 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.393 10:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3ZTc5Mjg4MDA1NTFhNTcyZDcyMjg3N2ZkY2ZhMDL8ztsm: 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: ]] 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM1OTBhYzcyZDEzZDQ5OGQxMWM5N2YxMjEwZDljNDN7/r0r: 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.393 10:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.393 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.394 10:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 nvme0n1 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGE1ZGE1M2MwZjRjNjZjM2MwOTZjOWZhMzQxZjNkY2MxNjQyYWE0NjhjNThmMGRlbln60w==: 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODRlMzc5MjY5YWZkYTcwMTIzOTFlYjQ2MjgzODFkZjmuzEM5: 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.961 10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 
10:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.529 nvme0n1 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI0OTRlNzExZWRkMThiNzgwOGZkZGJjYmVlN2Y1MTliNTVhMTk5Y2RmMzNmMTBiMGZiODc1MzM5ZjZiYzJhZORSKos=: 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.529 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.530 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.096 nvme0n1 00:27:41.096 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.096 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.096 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.096 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.096 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.096 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4NmU2ZGViNTcyMjE4OTUxMzQ3NDI1YTIxOTZkOWY3ZTRiOTg4NTNmZTVlNWUyU2RpbQ==: 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: ]] 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YxOWVjMTcxN2M0OGIyODg2MGYzOTg4NWY4ODM3ZmI3MWRiYTRkZGNkZWNhYmVl0OVRMg==: 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.355 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.356 request: 00:27:41.356 { 00:27:41.356 "name": "nvme0", 00:27:41.356 "trtype": "tcp", 00:27:41.356 "traddr": "10.0.0.1", 00:27:41.356 "adrfam": "ipv4", 00:27:41.356 "trsvcid": "4420", 00:27:41.356 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.356 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.356 "prchk_reftag": false, 00:27:41.356 "prchk_guard": false, 00:27:41.356 "hdgst": false, 00:27:41.356 "ddgst": false, 00:27:41.356 "method": "bdev_nvme_attach_controller", 00:27:41.356 "req_id": 1 00:27:41.356 } 00:27:41.356 Got JSON-RPC error response 00:27:41.356 response: 00:27:41.356 { 00:27:41.356 "code": -5, 00:27:41.356 "message": "Input/output error" 00:27:41.356 } 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.356 10:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.356 10:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.356 request: 00:27:41.356 { 00:27:41.356 "name": "nvme0", 00:27:41.356 "trtype": "tcp", 00:27:41.356 "traddr": "10.0.0.1", 00:27:41.356 "adrfam": "ipv4", 00:27:41.356 "trsvcid": "4420", 00:27:41.356 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.356 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.356 "prchk_reftag": false, 00:27:41.356 "prchk_guard": false, 00:27:41.356 "hdgst": false, 00:27:41.356 "ddgst": false, 00:27:41.356 "dhchap_key": "key2", 00:27:41.356 "method": "bdev_nvme_attach_controller", 00:27:41.356 "req_id": 1 00:27:41.356 } 00:27:41.356 Got JSON-RPC error response 00:27:41.356 response: 00:27:41.356 { 00:27:41.356 "code": -5, 00:27:41.356 "message": "Input/output error" 00:27:41.356 } 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.356 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.616 request: 00:27:41.616 { 00:27:41.616 "name": "nvme0", 00:27:41.616 "trtype": "tcp", 00:27:41.616 "traddr": "10.0.0.1", 00:27:41.616 "adrfam": "ipv4", 00:27:41.616 "trsvcid": "4420", 00:27:41.616 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.616 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.616 "prchk_reftag": false, 00:27:41.616 "prchk_guard": false, 00:27:41.616 "hdgst": false, 00:27:41.616 "ddgst": false, 00:27:41.616 "dhchap_key": "key1", 00:27:41.616 "dhchap_ctrlr_key": "ckey2", 00:27:41.616 "method": "bdev_nvme_attach_controller", 00:27:41.616 "req_id": 1 00:27:41.616 } 00:27:41.616 Got JSON-RPC error response 00:27:41.616 response: 00:27:41.616 { 00:27:41.616 "code": -5, 00:27:41.616 "message": "Input/output error" 00:27:41.616 } 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.616 rmmod nvme_tcp 00:27:41.616 rmmod nvme_fabrics 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 4025652 ']' 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 4025652 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 4025652 ']' 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 4025652 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4025652 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4025652' 00:27:41.616 killing process with pid 4025652 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 4025652 00:27:41.616 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 4025652 00:27:41.875 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.875 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.875 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.875 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.875 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.875 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.875 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.875 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.777 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:44.036 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:47.324 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:47.324 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:48.734 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:48.992 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8hS /tmp/spdk.key-null.bbu /tmp/spdk.key-sha256.YYR /tmp/spdk.key-sha384.x9Z /tmp/spdk.key-sha512.DsW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:48.992 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:51.525 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:51.525 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:51.783 00:27:51.783 real 0m52.563s 00:27:51.783 user 0m45.250s 00:27:51.783 sys 0m14.454s 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.783 ************************************ 00:27:51.783 END TEST nvmf_auth_host 00:27:51.783 ************************************ 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.783 ************************************ 00:27:51.783 START TEST nvmf_digest 00:27:51.783 ************************************ 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:51.783 * Looking for test storage... 
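The nvmf_digest suite that starts here exercises NVMe/TCP digest handling: a bdevperf initiator attaches to the target with the data digest (a CRC32C over each PDU payload) enabled, runs I/O, and then reads SPDK's accel statistics to see which module actually executed the crc32c work (plain software in this run, DSA in the offloaded variants). A minimal sketch of the knob under test, using the same socket, address and NQN that appear further down in this log; header digest is toggled the same way in the non-clean cases:

  # attach the initiator-side controller with data digest enabled (--ddgst)
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0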
00:27:51.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.783 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.784 
10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:51.784 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:59.903 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:59.903 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.903 
10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:59.903 Found net devices under 0000:af:00.0: cvl_0_0 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:59.903 Found net devices under 0000:af:00.1: cvl_0_1 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.903 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.904 10:43:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:27:59.904 00:27:59.904 --- 10.0.0.2 ping statistics --- 00:27:59.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.904 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:59.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:27:59.904 00:27:59.904 --- 10.0.0.1 ping statistics --- 00:27:59.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.904 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:59.904 ************************************ 00:27:59.904 START TEST nvmf_digest_clean 00:27:59.904 ************************************ 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=4039432 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 4039432 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 4039432 ']' 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.904 10:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:59.904 [2024-07-25 10:43:02.449680] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:59.904 [2024-07-25 10:43:02.449733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.904 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.904 [2024-07-25 10:43:02.524915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.904 [2024-07-25 10:43:02.602424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.904 [2024-07-25 10:43:02.602462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.904 [2024-07-25 10:43:02.602474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.904 [2024-07-25 10:43:02.602499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.904 [2024-07-25 10:43:02.602507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
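The target was started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, so it stays idle until the rpc_cmd block below configures it: start the framework, create the TCP transport, publish a null bdev ("null0") as a namespace, and listen on 10.0.0.2:4420. Spelled out as individual rpc.py calls, the sequence is presumably along these lines (the RPC names are standard SPDK ones; the null-bdev size and block size are illustrative, not taken from this log):

  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp                    # '*** TCP Transport Init ***' below
  scripts/rpc.py bdev_null_create null0 1000 512                 # size_mb / block_size assumed
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # 'Listening on 10.0.0.2 port 4420'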
00:27:59.904 [2024-07-25 10:43:02.602534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.904 null0 00:27:59.904 [2024-07-25 10:43:03.372297] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.904 [2024-07-25 10:43:03.396488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4039571 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4039571 /var/tmp/bperf.sock 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 4039571 ']' 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:59.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.904 10:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.904 [2024-07-25 10:43:03.433234] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:27:59.904 [2024-07-25 10:43:03.433277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039571 ] 00:27:59.904 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.904 [2024-07-25 10:43:03.503961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.905 [2024-07-25 10:43:03.578334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.841 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.841 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:00.841 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:00.841 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:00.841 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:00.841 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.841 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.408 nvme0n1 00:28:01.408 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:01.408 10:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:01.408 Running I/O for 2 seconds... 
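Collected from the commands echoed in this chunk, the initiator side of each run_bperf pass follows the same four-step shape; a sketch using this first pass's 4 KiB randread, queue depth 128 parameters ($SPDK is only shorthand for the workspace path printed in the log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1. bdevperf on core 1 (-m 2), held at --wait-for-rpc, RPC socket bperf.sock.
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. Finish framework init (scan_dsa=false here, so no DSA module is configured).
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 3. Attach the target subsystem with data digest enabled (--ddgst), which is what
  #    makes every data PDU carry a crc32c handled through the accel framework.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. Run the timed workload against the resulting nvme0n1 bdev.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The -t 2 flag is why every result block below reports roughly two seconds of runtime.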
00:28:03.310 00:28:03.311 Latency(us) 00:28:03.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.311 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:03.311 nvme0n1 : 2.00 28510.47 111.37 0.00 0.00 4484.89 2097.15 12582.91 00:28:03.311 =================================================================================================================== 00:28:03.311 Total : 28510.47 111.37 0.00 0.00 4484.89 2097.15 12582.91 00:28:03.311 0 00:28:03.311 10:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:03.311 10:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:03.311 10:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:03.311 10:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:03.311 | select(.opcode=="crc32c") 00:28:03.311 | "\(.module_name) \(.executed)"' 00:28:03.311 10:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4039571 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 4039571 ']' 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 4039571 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4039571 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4039571' 00:28:03.570 killing process with pid 4039571 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 4039571 00:28:03.570 Received shutdown signal, test time was about 2.000000 seconds 00:28:03.570 00:28:03.570 Latency(us) 00:28:03.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.570 =================================================================================================================== 00:28:03.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:03.570 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 4039571 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4040329 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4040329 /var/tmp/bperf.sock 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 4040329 ']' 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:03.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.829 10:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:03.829 [2024-07-25 10:43:07.438719] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:03.829 [2024-07-25 10:43:07.438779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040329 ] 00:28:03.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:03.829 Zero copy mechanism will not be used. 
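The pass that just finished ends with the digest accounting check visible a little further up: accel statistics are pulled from bdevperf and the crc32c entry must show a non-zero execution count in the expected module (software, since scan_dsa is false throughout this test). A sketch of that check, reusing the jq filter echoed in the log and the same path shorthand as before:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Produces "<module_name> <executed>" for the crc32c opcode only.
  read -r acc_module acc_executed < <($SPDK/scripts/rpc.py -s /var/tmp/bperf.sock \
      accel_get_stats | jq -rc \
      '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # Pass only if digests were actually computed, and by the module the test expects.
  (( acc_executed > 0 )) && [[ $acc_module == software ]]

The same check repeats after each of the remaining clean-path passes below.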
00:28:03.829 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.829 [2024-07-25 10:43:07.508344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.088 [2024-07-25 10:43:07.582596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.656 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.656 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:04.656 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:04.656 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:04.656 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:04.914 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.914 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.172 nvme0n1 00:28:05.172 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:05.172 10:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:05.172 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.172 Zero copy mechanism will not be used. 00:28:05.172 Running I/O for 2 seconds... 
00:28:07.702 00:28:07.702 Latency(us) 00:28:07.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.702 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:07.702 nvme0n1 : 2.00 4286.93 535.87 0.00 0.00 3729.86 629.15 9594.47 00:28:07.702 =================================================================================================================== 00:28:07.702 Total : 4286.93 535.87 0.00 0.00 3729.86 629.15 9594.47 00:28:07.702 0 00:28:07.702 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:07.702 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:07.702 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:07.702 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:07.702 | select(.opcode=="crc32c") 00:28:07.702 | "\(.module_name) \(.executed)"' 00:28:07.702 10:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4040329 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 4040329 ']' 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 4040329 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4040329 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4040329' 00:28:07.702 killing process with pid 4040329 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 4040329 00:28:07.702 Received shutdown signal, test time was about 2.000000 seconds 00:28:07.702 00:28:07.702 Latency(us) 00:28:07.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.702 =================================================================================================================== 00:28:07.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 4040329 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4040906 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4040906 /var/tmp/bperf.sock 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 4040906 ']' 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:07.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.702 10:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:07.702 [2024-07-25 10:43:11.317073] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:28:07.702 [2024-07-25 10:43:11.317125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040906 ] 00:28:07.702 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.702 [2024-07-25 10:43:11.385772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.960 [2024-07-25 10:43:11.449430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.527 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.527 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:08.527 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:08.527 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:08.527 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:08.785 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.785 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.045 nvme0n1 00:28:09.306 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:09.306 10:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.306 Running I/O for 2 seconds... 
00:28:11.236 00:28:11.236 Latency(us) 00:28:11.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.236 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.236 nvme0n1 : 2.00 28161.38 110.01 0.00 0.00 4537.26 3853.52 16462.64 00:28:11.236 =================================================================================================================== 00:28:11.236 Total : 28161.38 110.01 0.00 0.00 4537.26 3853.52 16462.64 00:28:11.236 0 00:28:11.236 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:11.236 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:11.236 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:11.236 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:11.236 | select(.opcode=="crc32c") 00:28:11.236 | "\(.module_name) \(.executed)"' 00:28:11.236 10:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4040906 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 4040906 ']' 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 4040906 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4040906 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4040906' 00:28:11.494 killing process with pid 4040906 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 4040906 00:28:11.494 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.494 00:28:11.494 Latency(us) 00:28:11.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.494 =================================================================================================================== 00:28:11.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.494 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 4040906 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4041583 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4041583 /var/tmp/bperf.sock 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 4041583 ']' 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:11.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.753 10:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:11.753 [2024-07-25 10:43:15.296038] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:11.753 [2024-07-25 10:43:15.296092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041583 ] 00:28:11.753 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:11.753 Zero copy mechanism will not be used. 
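At this point all four clean-path combinations have been launched. As echoed at host/digest.sh lines 128 through 131 above, they differ only in workload, I/O size, and queue depth (the trailing false is the scan_dsa flag); restated with comments:

  run_bperf randread  4096   128 false   # 4 KiB reads,   qd 128
  run_bperf randread  131072 16  false   # 128 KiB reads, qd 16
  run_bperf randwrite 4096   128 false   # 4 KiB writes,  qd 128
  run_bperf randwrite 131072 16  false   # 128 KiB writes, qd 16

The 128 KiB cases are the ones that print the "I/O size of 131072 is greater than zero copy threshold (65536)" notice, since they exceed the 64 KiB zero-copy threshold reported in the log.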
00:28:11.753 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.753 [2024-07-25 10:43:15.366204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.753 [2024-07-25 10:43:15.440877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.687 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.687 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:12.687 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:12.687 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:12.687 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:12.687 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.687 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.255 nvme0n1 00:28:13.255 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:13.255 10:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.255 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.255 Zero copy mechanism will not be used. 00:28:13.255 Running I/O for 2 seconds... 
00:28:15.161 00:28:15.161 Latency(us) 00:28:15.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.161 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:15.161 nvme0n1 : 2.00 4129.41 516.18 0.00 0.00 3869.81 2319.97 20132.66 00:28:15.161 =================================================================================================================== 00:28:15.161 Total : 4129.41 516.18 0.00 0.00 3869.81 2319.97 20132.66 00:28:15.161 0 00:28:15.161 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:15.161 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:15.161 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:15.161 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:15.161 | select(.opcode=="crc32c") 00:28:15.161 | "\(.module_name) \(.executed)"' 00:28:15.161 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4041583 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 4041583 ']' 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 4041583 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.419 10:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4041583 00:28:15.420 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:15.420 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:15.420 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4041583' 00:28:15.420 killing process with pid 4041583 00:28:15.420 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 4041583 00:28:15.420 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.420 00:28:15.420 Latency(us) 00:28:15.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.420 =================================================================================================================== 00:28:15.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.420 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 4041583 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4039432 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 4039432 ']' 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 4039432 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4039432 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4039432' 00:28:15.678 killing process with pid 4039432 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 4039432 00:28:15.678 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 4039432 00:28:15.937 00:28:15.937 real 0m17.052s 00:28:15.937 user 0m32.073s 00:28:15.937 sys 0m5.007s 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.937 ************************************ 00:28:15.937 END TEST nvmf_digest_clean 00:28:15.937 ************************************ 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.937 ************************************ 00:28:15.937 START TEST nvmf_digest_error 00:28:15.937 ************************************ 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=4042272 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 4042272 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 4042272 ']' 
00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.937 10:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.937 [2024-07-25 10:43:19.560940] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:15.937 [2024-07-25 10:43:19.560981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.937 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.937 [2024-07-25 10:43:19.633427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.195 [2024-07-25 10:43:19.707328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.196 [2024-07-25 10:43:19.707367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.196 [2024-07-25 10:43:19.707376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.196 [2024-07-25 10:43:19.707385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.196 [2024-07-25 10:43:19.707393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
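The nvmf_digest_error test starting here reuses the clean-path flow but reconfigures the accel framework before I/O. Per the RPCs echoed in the chunks that follow, crc32c is reassigned to the error-injection accel module and corruption is injected through rpc_cmd, while the bdevperf side is told (over bperf.sock) to keep NVMe error statistics and retry indefinitely; the initiator then logs "data digest error" and the reads complete with COMMAND TRANSIENT TRANSPORT ERROR. A consolidated sketch of those calls, with rpc_cmd shown as a plain rpc.py call on its default socket (an assumption about the helper) and $SPDK again standing for the workspace path:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Route all crc32c work through the "error" accel module.
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # bdevperf side: count NVMe errors and never give up on retries.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # Start with injection disabled, attach with --ddgst as in the clean runs, then
  # corrupt 256 crc32c operations so the data digests stop matching.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256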
00:28:16.196 [2024-07-25 10:43:19.707413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.763 [2024-07-25 10:43:20.417500] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.763 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.022 null0 00:28:17.022 [2024-07-25 10:43:20.507085] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.022 [2024-07-25 10:43:20.531280] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4042548 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4042548 /var/tmp/bperf.sock 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 4042548 ']' 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.022 10:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.022 [2024-07-25 10:43:20.568825] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:17.022 [2024-07-25 10:43:20.568871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042548 ] 00:28:17.022 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.022 [2024-07-25 10:43:20.636648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.022 [2024-07-25 10:43:20.705379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.960 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.219 nvme0n1 00:28:18.219 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:18.219 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.219 10:43:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.219 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.219 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:18.219 10:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.219 Running I/O for 2 seconds... 00:28:18.219 [2024-07-25 10:43:21.906398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.219 [2024-07-25 10:43:21.906432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.219 [2024-07-25 10:43:21.906445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.219 [2024-07-25 10:43:21.915420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.219 [2024-07-25 10:43:21.915447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.219 [2024-07-25 10:43:21.915459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.924555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.924579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.924591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.933590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.933612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.933624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.942090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.942112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.942122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.951553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.951576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.951587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.961139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.961161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.961172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.970074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.970101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.970112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.978967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.978988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.978999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.988736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.988758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.988768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:21.996901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:21.996923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:21.996933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.007321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.007341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.007352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.016673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.016694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.479 [2024-07-25 10:43:22.016705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.026544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.026565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.026576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.034853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.034874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.034885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.044168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.044189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.044200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.052936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.052957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.052968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.061514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.061536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.061546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.070720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.070741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.070752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.079259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.079282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:13328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.079292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.088968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.088990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.089000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.097230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.097252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.097263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.107593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.107615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.107625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.479 [2024-07-25 10:43:22.116320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.479 [2024-07-25 10:43:22.116341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.479 [2024-07-25 10:43:22.116352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.480 [2024-07-25 10:43:22.124867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.480 [2024-07-25 10:43:22.124888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.480 [2024-07-25 10:43:22.124902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.480 [2024-07-25 10:43:22.134467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.480 [2024-07-25 10:43:22.134488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.480 [2024-07-25 10:43:22.134499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.480 [2024-07-25 10:43:22.143218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.480 [2024-07-25 10:43:22.143239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.480 [2024-07-25 10:43:22.143250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.480 [2024-07-25 10:43:22.152816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.480 [2024-07-25 10:43:22.152837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.480 [2024-07-25 10:43:22.152847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.480 [2024-07-25 10:43:22.161625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.480 [2024-07-25 10:43:22.161646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.480 [2024-07-25 10:43:22.161657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.480 [2024-07-25 10:43:22.170676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.480 [2024-07-25 10:43:22.170697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.480 [2024-07-25 10:43:22.170707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.480 [2024-07-25 10:43:22.178995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.480 [2024-07-25 10:43:22.179017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.480 [2024-07-25 10:43:22.179028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.739 [2024-07-25 10:43:22.188993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.739 [2024-07-25 10:43:22.189015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.739 [2024-07-25 10:43:22.189026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.197484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.197504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.197515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.206356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 
00:28:18.740 [2024-07-25 10:43:22.206377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.206387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.215347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.215368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.215379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.224673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.224695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.224705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.235033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.235055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.235065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.243985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.244007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.244017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.254440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.254461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.254471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.262776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.262797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.262808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.272868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.272896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.272906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.281179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.281200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.281216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.290668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.290689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.290699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.299327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.299348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.299359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.307178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.307198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.307209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.317038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.317059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.317070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.326782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.326804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.326814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.336240] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.336261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.336271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.344410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.344432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.344443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.353685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.353706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.353722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.362753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.362778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.362788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.371094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.371115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.371126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.381719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.381741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.381752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.390782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.390804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.390814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.400172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.400194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.400204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.408127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.408148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.408159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.418338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.418360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.418370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.427075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.427096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.740 [2024-07-25 10:43:22.427106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.740 [2024-07-25 10:43:22.436516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:18.740 [2024-07-25 10:43:22.436538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.741 [2024-07-25 10:43:22.436548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.000 [2024-07-25 10:43:22.445655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.000 [2024-07-25 10:43:22.445676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.000 [2024-07-25 10:43:22.445687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.000 [2024-07-25 10:43:22.453828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.000 [2024-07-25 10:43:22.453850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.000 [2024-07-25 10:43:22.453860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.000 [2024-07-25 10:43:22.464580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.000 [2024-07-25 10:43:22.464602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.464612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.472407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.472428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.472438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.482902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.482923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.482933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.490838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.490858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.490868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.501078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.501099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.501110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.509138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.509159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.509169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.517814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.517835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.517848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.527368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.527389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.527399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.536091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.536112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.536122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.544701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.544728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.544738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.553811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.553833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.553843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.562529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.562550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.562560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.571353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.571374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.571384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.579876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.579897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:19.001 [2024-07-25 10:43:22.579907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.589475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.589496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.589507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.598516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.598538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.598549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.608329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.608352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.608363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.616002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.616023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.616034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.625926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.625947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.625958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.635532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.635553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.635564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.645230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.645252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:6696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.645262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.654542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.654563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.654573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.662312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.662334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.662345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.672298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.672320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.672335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.680620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.680641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.680651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.689644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.001 [2024-07-25 10:43:22.689666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.001 [2024-07-25 10:43:22.689677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.001 [2024-07-25 10:43:22.698772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.002 [2024-07-25 10:43:22.698793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.002 [2024-07-25 10:43:22.698803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.707961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.707983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.707993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.717144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.717166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.717177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.727876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.727898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.727908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.735727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.735749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.735759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.745540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.745561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.745571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.753571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.753595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.753606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.763153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.763174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.763184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.772417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 
00:28:19.262 [2024-07-25 10:43:22.772439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.772449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.780837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.780859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.780869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.790728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.790749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.790759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.798670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.798691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.798701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.807457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.807478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.807488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.816263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.816284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.816294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.825936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.825958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.825968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.834416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.834438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.834448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.843180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.843201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.843211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.851854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.851875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.851885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.861120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.861141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.262 [2024-07-25 10:43:22.861152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.262 [2024-07-25 10:43:22.870586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.262 [2024-07-25 10:43:22.870607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.870617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.879035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.879056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.879067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.887264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.887287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.887297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.897021] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.897042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.897053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.906912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.906934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.906949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.914849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.914871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.914882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.924327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.924349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.924360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.933836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.933858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.933868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.943135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.943158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.943168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.263 [2024-07-25 10:43:22.951816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.951838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.951848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:19.263 [2024-07-25 10:43:22.959968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.263 [2024-07-25 10:43:22.959990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.263 [2024-07-25 10:43:22.960000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:22.970059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:22.970083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:22.970094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:22.979486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:22.979509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:22.979519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:22.987971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:22.987996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:22.988006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:22.996201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:22.996223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:22.996233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.005858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.005882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.005892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.014861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.014884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.014894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.023300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.023322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.023333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.033455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.033477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.033488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.041515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.041538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.041548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.050659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.050681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.050692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.059312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.059334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.059344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.068713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.068746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.068757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.077284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.077306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.077317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.086090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.086112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.086122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.094908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.094930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.094940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.103468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.103489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.103500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.114017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.114039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.114050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.121959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.121981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.121991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.131925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.131947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.131957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.141599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.141624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:19.523 [2024-07-25 10:43:23.141635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.150551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.150573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.150584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.159104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.159126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.159137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.168309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.168331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.168342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.177752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.177775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.177785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.187282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.523 [2024-07-25 10:43:23.187303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.523 [2024-07-25 10:43:23.187314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.523 [2024-07-25 10:43:23.195976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.524 [2024-07-25 10:43:23.195998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.524 [2024-07-25 10:43:23.196009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.524 [2024-07-25 10:43:23.205737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.524 [2024-07-25 10:43:23.205760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:11199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.524 [2024-07-25 10:43:23.205770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.524 [2024-07-25 10:43:23.213741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.524 [2024-07-25 10:43:23.213762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.524 [2024-07-25 10:43:23.213773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.524 [2024-07-25 10:43:23.222891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.524 [2024-07-25 10:43:23.222913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.524 [2024-07-25 10:43:23.222923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.231542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.231564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.231574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.241423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.241445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.241455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.251385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.251407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.251418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.259666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.259687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.259698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.269038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.269060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.269070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.278271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.278293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.278304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.286057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.286078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.286089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.295426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.295448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.295461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.305634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.305657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.305667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.314252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.314273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.314284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.322764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.322786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.322796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.332099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 
00:28:19.790 [2024-07-25 10:43:23.332120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.332131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.790 [2024-07-25 10:43:23.340440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.790 [2024-07-25 10:43:23.340461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.790 [2024-07-25 10:43:23.340472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.349343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.349366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.349377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.358901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.358923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.358933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.367068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.367090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.367100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.376521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.376546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.376556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.385187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.385209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.385219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.393262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.393283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.393294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.402622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.402645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.402655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.412570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.412593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.412603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.420342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.420364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.420374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.429693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.429723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.429734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.438036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.438058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.438069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.447668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.447690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.447701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.457233] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.457256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.457266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.466124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.466146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.466157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.474219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.474242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.474252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.791 [2024-07-25 10:43:23.483975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:19.791 [2024-07-25 10:43:23.483999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.791 [2024-07-25 10:43:23.484011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.051 [2024-07-25 10:43:23.491863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.051 [2024-07-25 10:43:23.491886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.051 [2024-07-25 10:43:23.491897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.051 [2024-07-25 10:43:23.501482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.051 [2024-07-25 10:43:23.501505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.051 [2024-07-25 10:43:23.501516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.051 [2024-07-25 10:43:23.511648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.051 [2024-07-25 10:43:23.511670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.051 [2024-07-25 10:43:23.511681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:20.051 [2024-07-25 10:43:23.520451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.520474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.520484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.530279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.530303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.530318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.539366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.539389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.539400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.548091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.548114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.548124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.558325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.558348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.558359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.567014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.567036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.567047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.576183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.576205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.576216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.584423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.584445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.584456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.593940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.593961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.593972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.603392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.603414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.603424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.612422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.612443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.612454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.620612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.620633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.620644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.629756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.629777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.629788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.637820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.637841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.637851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.647632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.647654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.647664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.656357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.656378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.656388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.665339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.665360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.665371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.672886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.672909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.672920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.683033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.683057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.683072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.692611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.692635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.692645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.700927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.700949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:20.052 [2024-07-25 10:43:23.700960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.709973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.709994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.710005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.718341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.718362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.718372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.726967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.726989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.726999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.735846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.735867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.735878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.744656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.744677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.744688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.052 [2024-07-25 10:43:23.753454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.052 [2024-07-25 10:43:23.753475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.052 [2024-07-25 10:43:23.753486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.762610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.762635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.762645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.771833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.771854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.771864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.779820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.779842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.779853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.789789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.789811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.789822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.798096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.798117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.798127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.807690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.807711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.807732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.815470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.815490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.815500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.825122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.825143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.825153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.834069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.834089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.834100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.842529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.842550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.311 [2024-07-25 10:43:23.842561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.311 [2024-07-25 10:43:23.852271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.311 [2024-07-25 10:43:23.852292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.312 [2024-07-25 10:43:23.852302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.312 [2024-07-25 10:43:23.859874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.312 [2024-07-25 10:43:23.859895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.312 [2024-07-25 10:43:23.859906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.312 [2024-07-25 10:43:23.871233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.312 [2024-07-25 10:43:23.871256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.312 [2024-07-25 10:43:23.871267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.312 [2024-07-25 10:43:23.879107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 00:28:20.312 [2024-07-25 10:43:23.879128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.312 [2024-07-25 10:43:23.879139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.312 [2024-07-25 10:43:23.888112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e1c0) 
00:28:20.312 [2024-07-25 10:43:23.888134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.312 [2024-07-25 10:43:23.888144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.312
00:28:20.312 Latency(us)
00:28:20.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.312 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:20.312 nvme0n1 : 2.00 28102.70 109.78 0.00 0.00 4550.05 2136.47 14784.92
00:28:20.312 ===================================================================================================================
00:28:20.312 Total : 28102.70 109.78 0.00 0.00 4550.05 2136.47 14784.92
00:28:20.312 0
00:28:20.312 10:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:20.312 10:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:20.312 10:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:20.312 | .driver_specific
00:28:20.312 | .nvme_error
00:28:20.312 | .status_code
00:28:20.312 | .command_transient_transport_error'
00:28:20.312 10:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4042548
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 4042548 ']'
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 4042548
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4042548
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:20.570 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:20.571 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4042548'
00:28:20.571 killing process with pid 4042548
00:28:20.571 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 4042548
00:28:20.571 Received shutdown signal, test time was about 2.000000 seconds
00:28:20.571
00:28:20.571 Latency(us)
00:28:20.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.571 ===================================================================================================================
00:28:20.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:20.571 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 4042548
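The block above is the pass/fail core of the digest-error subtest: bdevperf reports 28102.70 randread IOPS on nvme0n1 (the MiB/s column is just IOPS times the 4 KiB I/O size: 28102.70 x 4096 B is about 109.78 MiB/s), and get_transient_errcount then asserts that at least one I/O completed with COMMAND TRANSIENT TRANSPORT ERROR before the bperf process is torn down. Reduced to stand-alone shell, and assuming the same workspace path, RPC socket and bdev name as this run (errcount is a hypothetical variable name), the check is roughly:

    # Ask bdevperf's bdev layer for per-NVMe-status-code error counters
    # (populated because bdev_nvme_set_options was called with --nvme-error-stat).
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # 220 in this run; a zero count would fail the subtest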
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4043099
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4043099 /var/tmp/bperf.sock
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 4043099 ']'
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:20.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:20.830 10:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:20.830 [2024-07-25 10:43:24.363415] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
00:28:20.830 [2024-07-25 10:43:24.363469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043099 ]
00:28:20.830 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:20.830 Zero copy mechanism will not be used.
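run_bperf_err starts a fresh bdevperf instance per error case; this second case uses 128 KiB random reads at queue depth 16 for 2 seconds. The launch-and-wait step traced above amounts to the following sketch (same binary, socket and flags as logged; the polling loop is only a rough stand-in for autotest_common.sh's waitforlisten, with rpc_get_methods used as a cheap probe of the socket):

    # Start bdevperf against its own RPC socket; -z makes it wait for a perform_tests RPC before issuing I/O.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Wait (up to roughly 10 s) for the app to come up and listen on the UNIX domain socket.
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done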
00:28:20.830 EAL: No free 2048 kB hugepages reported on node 1
00:28:20.830 [2024-07-25 10:43:24.432217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:20.830 [2024-07-25 10:43:24.501480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:21.767 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:22.026 nvme0n1
00:28:22.026 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:22.026 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:22.026 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:22.026 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:22.026 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:22.026 10:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:22.026 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:22.026 Zero copy mechanism will not be used.
00:28:22.026 Running I/O for 2 seconds...
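With the new bdevperf process listening, the trace above is the whole configuration protocol for this error case: enable per-status-code NVMe error counters and unlimited bdev retries, clear any stale crc32c error injection, attach the TCP controller with the data digest enabled, arm crc32c corruption, then trigger the timed run. A condensed sketch of the same sequence (bperf_rpc below is a hypothetical shorthand for the logged rpc.py invocation; rpc_cmd is the autotest_common.sh helper used above, whose target socket is not visible in this excerpt):

    bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors per status code; -1 retries failed I/O indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # make sure no injection is left over from the previous case
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # --ddgst turns on the TCP data digest; prints the new bdev, nvme0n1
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt crc32c results (same -i 32 argument as logged)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                                  # starts the 2-second randread run on the idling (-z) bdevperf

The digest-error notices that follow are then the expected outcome: every READ whose received data digest fails verification is completed back to the bdev layer as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the counter the subtest checks at the end.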
00:28:22.026 [2024-07-25 10:43:25.654842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.026 [2024-07-25 10:43:25.654878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.654890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.665801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.665827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.665838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.674882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.674908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.674919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.682959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.682982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.682992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.689845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.689867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.689877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.696448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.696469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.696480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.702897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.702919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.702929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.709280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.709302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.709312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.715625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.715646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.715657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.722053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.722076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.722086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.027 [2024-07-25 10:43:25.729068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.027 [2024-07-25 10:43:25.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.027 [2024-07-25 10:43:25.729105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.735564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.735588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.735599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.742135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.742158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.742169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.748668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.748690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.748701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.755229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.755251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.755262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.761697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.761724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.761735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.768037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.768059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.768070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.774435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.774457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.774468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.780772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.780794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.780805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.787138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.787160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.787174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.793662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.793684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:22.287 [2024-07-25 10:43:25.793694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.800088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.800109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.800120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.806495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.806516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.806527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.812974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.812996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.813006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.819324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.819346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.819357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.825661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.825682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.825693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.832029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.832049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.832060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.287 [2024-07-25 10:43:25.838372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.287 [2024-07-25 10:43:25.838394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.287 [2024-07-25 10:43:25.838404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.844802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.844824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.844834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.851162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.851185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.851196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.857544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.857566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.857577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.864034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.864057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.864067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.870404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.870426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.870436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.876793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.876815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.876826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.883249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.883272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.883282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.889627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.889650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.889661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.896152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.896175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.896188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.902680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.902702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.902713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.909189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.909212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.909222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.915859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.915882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.915893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.922291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.922313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.922323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.928696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 
[2024-07-25 10:43:25.928725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.928739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.935099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.935121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.935131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.941468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.941490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.941500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.947888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.947912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.947922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.954384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.954410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.954420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.960886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.960910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.960920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.967360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.967383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.967393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.973942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.973966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.973976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.980464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.980487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.980497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.288 [2024-07-25 10:43:25.987012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.288 [2024-07-25 10:43:25.987035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.288 [2024-07-25 10:43:25.987046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.549 [2024-07-25 10:43:25.993566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.549 [2024-07-25 10:43:25.993591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.549 [2024-07-25 10:43:25.993601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.549 [2024-07-25 10:43:25.999175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.549 [2024-07-25 10:43:25.999198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.549 [2024-07-25 10:43:25.999208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.549 [2024-07-25 10:43:26.005708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.005736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.005747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.012158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.012181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.012191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.018741] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.018763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.018773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.025210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.025233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.025243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.031687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.031710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.031727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.038216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.038239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.038249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.044694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.044723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.044737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.051194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.051216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.051227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.057732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.057753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.057763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:22.550 [2024-07-25 10:43:26.064277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.064299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.064312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.070760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.070783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.070793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.077404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.077427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.077437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.083955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.083978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.083988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.090375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.090397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.090408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.096874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.096897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.096908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.103489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.103513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.103524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.109921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.109951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.109961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.116303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.116326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.116337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.122790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.122816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.122826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.129221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.129244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.129255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.135710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.135738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.135748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.142198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.142222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.142233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.148674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.148697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.148708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.155191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.155214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.155224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.161644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.161667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.161677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.168201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.168225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.168235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.550 [2024-07-25 10:43:26.174743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.550 [2024-07-25 10:43:26.174765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.550 [2024-07-25 10:43:26.174775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.180615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.180638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.180649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.187153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.187177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.187187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.193628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.193652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.193663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.200192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.200215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.200225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.206795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.206818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.206829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.213311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.213334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.213344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.219825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.219849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.219859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.226297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.226321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.226331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.232764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.232787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.232804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.239146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:22.551 [2024-07-25 10:43:26.239181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.551 [2024-07-25 10:43:26.245600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.551 [2024-07-25 10:43:26.245624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.551 [2024-07-25 10:43:26.245634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.252059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.252084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.252096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.258502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.258526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.258537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.265112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.265136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.265147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.271626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.271647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.271658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.278219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.278242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.278252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.284674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.284697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.284707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.291098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.291121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.291131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.297475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.297498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.297509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.303877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.303899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.303909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.310241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.310265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.310276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.316742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.316765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.316776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.323194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.323217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.323228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.329275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.329299] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.329310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.335830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.335854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.335865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.342209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.342233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.811 [2024-07-25 10:43:26.342246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.811 [2024-07-25 10:43:26.348668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.811 [2024-07-25 10:43:26.348692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.348702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.355110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.355133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.355144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.361449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.361472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.361482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.368108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.368132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.368141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.374599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.374623] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.374633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.381104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.381126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.381136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.387549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.387572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.387583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.394013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.394036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.394046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.400386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.400412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.400423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.406768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.406790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.406800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.413135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.413159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.413169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.419549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 
00:28:22.812 [2024-07-25 10:43:26.419573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.419583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.426041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.426064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.426075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.432454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.432478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.432488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.438939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.438962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.438973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.445426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.445450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.445460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.451853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.451876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.451886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.458245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.458268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.458278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.464637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.464661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.464671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.471032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.471055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.471065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.477418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.477441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.477451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.484058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.484081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.484092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.490183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.490208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.490218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.496773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.496797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.496808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.503199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.503222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.503232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.506568] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.506591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.506605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.812 [2024-07-25 10:43:26.513026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:22.812 [2024-07-25 10:43:26.513050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.812 [2024-07-25 10:43:26.513060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.072 [2024-07-25 10:43:26.519579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.072 [2024-07-25 10:43:26.519604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.072 [2024-07-25 10:43:26.519614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.072 [2024-07-25 10:43:26.525602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.072 [2024-07-25 10:43:26.525627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.072 [2024-07-25 10:43:26.525638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.072 [2024-07-25 10:43:26.532107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.072 [2024-07-25 10:43:26.532131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.072 [2024-07-25 10:43:26.532142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.072 [2024-07-25 10:43:26.538552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.072 [2024-07-25 10:43:26.538576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.072 [2024-07-25 10:43:26.538586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.072 [2024-07-25 10:43:26.544988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.072 [2024-07-25 10:43:26.545011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.072 [2024-07-25 10:43:26.545022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:23.072 [2024-07-25 10:43:26.551455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.072 [2024-07-25 10:43:26.551479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.072 [2024-07-25 10:43:26.551489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.072 [2024-07-25 10:43:26.558020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.072 [2024-07-25 10:43:26.558044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.072 [2024-07-25 10:43:26.558054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.564484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.564511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.564522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.570707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.570736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.570747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.577104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.577128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.577138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.583597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.583620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.583630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.590109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.590133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.590144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.596600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.596623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.596634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.603132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.603155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.603165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.609632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.609656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.609667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.616092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.616116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.616127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.622649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.622672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.622683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.629172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.629195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.629206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.635681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.635704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.635722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.642167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.642191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.642201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.648784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.648806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.648816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.655310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.655333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.655343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.661888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.661912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.661922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.668431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.668455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.668465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.675051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.675075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.675088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.681555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.681579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.681589] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.687566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.687590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.687600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.694189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.694213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.694223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.700413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.700437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.700448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.706960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.706984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.706994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.713406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.713429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.713439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.719927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.719951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.073 [2024-07-25 10:43:26.719961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.726341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.726364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:23.073 [2024-07-25 10:43:26.726375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.073 [2024-07-25 10:43:26.732865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.073 [2024-07-25 10:43:26.732889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.074 [2024-07-25 10:43:26.732899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.074 [2024-07-25 10:43:26.739305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.074 [2024-07-25 10:43:26.739328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.074 [2024-07-25 10:43:26.739339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.074 [2024-07-25 10:43:26.745693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.074 [2024-07-25 10:43:26.745724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.074 [2024-07-25 10:43:26.745736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.074 [2024-07-25 10:43:26.752073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.074 [2024-07-25 10:43:26.752098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.074 [2024-07-25 10:43:26.752109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.074 [2024-07-25 10:43:26.758492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.074 [2024-07-25 10:43:26.758516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.074 [2024-07-25 10:43:26.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.074 [2024-07-25 10:43:26.764905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.074 [2024-07-25 10:43:26.764929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.074 [2024-07-25 10:43:26.764940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.074 [2024-07-25 10:43:26.771316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.074 [2024-07-25 10:43:26.771339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.074 [2024-07-25 10:43:26.771350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.777828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.777853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.777864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.784478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.784501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.784515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.790827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.790850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.790860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.794307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.794330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.794341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.801389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.801411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.801421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.809583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.809607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.809618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.819022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.819046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.819057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.826702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.826730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.826741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.833544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.833567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.833577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.840745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.371 [2024-07-25 10:43:26.840767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.371 [2024-07-25 10:43:26.840777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.371 [2024-07-25 10:43:26.851566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.851590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.851601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.863285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.863308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.863318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.872505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.872528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.872539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.881313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 
[2024-07-25 10:43:26.881337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.881347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.892603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.892626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.892637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.904770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.904792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.904802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.914725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.914747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.914757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.923508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.923531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.923541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.930746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.930768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.930778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.937502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.937525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.937535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.944692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.944719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.944730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.952650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.952673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.952683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.961814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.961836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.961846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.972372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.972394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.972405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.983669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.983692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.983703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:26.995139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:26.995161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:26.995172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.006161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.006184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.006195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.020019] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.020043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.020057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.030948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.030972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.030983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.041638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.041662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.041673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.050300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.050324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.050335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.057969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.057992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.058002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.065051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.065073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.065083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.372 [2024-07-25 10:43:27.072339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.372 [2024-07-25 10:43:27.072362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.372 [2024-07-25 10:43:27.072372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
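
For reference, the "(00/22)" printed with each completion is the pair Status Code Type 0x0 (generic command status) / Status Code 0x22, which the NVMe specification names Transient Transport Error. The sketch below decodes such a status from completion queue entry Dword 3 under the usual NVMe layout; the struct and function names are local to this example, not SPDK's.

    /*
     * Illustrative decode of an NVMe completion status like the ones above:
     * SCT 0x0 with SC 0x22 is Transient Transport Error.  Field positions
     * follow completion queue entry Dword 3.
     */
    #include <stdint.h>
    #include <stdio.h>

    struct cpl_status {
        unsigned int p;    /* phase tag           (DW3 bit 16)     */
        unsigned int sc;   /* status code         (DW3 bits 24:17) */
        unsigned int sct;  /* status code type    (DW3 bits 27:25) */
        unsigned int crd;  /* command retry delay (DW3 bits 29:28) */
        unsigned int m;    /* more                (DW3 bit 30)     */
        unsigned int dnr;  /* do not retry        (DW3 bit 31)     */
    };

    static struct cpl_status decode_dw3(uint32_t dw3)
    {
        struct cpl_status s = {
            .p   = (dw3 >> 16) & 0x1,
            .sc  = (dw3 >> 17) & 0xFF,
            .sct = (dw3 >> 25) & 0x7,
            .crd = (dw3 >> 28) & 0x3,
            .m   = (dw3 >> 30) & 0x1,
            .dnr = (dw3 >> 31) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT=0x0, SC=0x22, everything else clear -> "(00/22) ... p:0 m:0 dnr:0" */
        uint32_t dw3 = (0x22u << 17);
        struct cpl_status s = decode_dw3(dw3);

        printf("sct:%x sc:%02x p:%u m:%u dnr:%u -> %s\n",
               s.sct, s.sc, s.p, s.m, s.dnr,
               (s.sct == 0x0 && s.sc == 0x22) ? "TRANSIENT TRANSPORT ERROR" : "other");
        return 0;
    }

With dnr (Do Not Retry) left clear, as in every entry above, the host is permitted to retry the failed READ.
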
00:28:23.632 [2024-07-25 10:43:27.078987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.079011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.079021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.085968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.085990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.086000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.092339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.092365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.092375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.100046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.100069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.100080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.107104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.107126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.107137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.113693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.113721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.113732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.120526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.120549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.120559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.127274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.127296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.127307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.138556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.138579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.138590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.149772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.149793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.149811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.159059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.632 [2024-07-25 10:43:27.159082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.632 [2024-07-25 10:43:27.159092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.632 [2024-07-25 10:43:27.171568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.171592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.171603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.182001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.182024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.182034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.191581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.191605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.191616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.201320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.201344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.201355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.210962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.210986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.210997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.220578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.220601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.220612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.230386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.230409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.230419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.239116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.239140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.239151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.253745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.253768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.253782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.265724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.265746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.265757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.276827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.276850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.276860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.285386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.285410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.285420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.296784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.296805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.296816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.308575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.308598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.308608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.318096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.318119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.318129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.326355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.326377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.633 [2024-07-25 10:43:27.326388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.633 [2024-07-25 10:43:27.333327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.633 [2024-07-25 10:43:27.333350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:23.633 [2024-07-25 10:43:27.333360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.339966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.339990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.340000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.352152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.352175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.352185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.362695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.362723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.362734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.371862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.371884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.371895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.379777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.379799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.379809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.388047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.388069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.388079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.395103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.395125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.395135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.402096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.402118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.402129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.409174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.409196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.409210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.416782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.416813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.423942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.423964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.423974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.430905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.430927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.430938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.444126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.444147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.444161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.454541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.454564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.454574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.463398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.463420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.463430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.471572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.471594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.471604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.478742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.478764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.478774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.485817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.485842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.485853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.492337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.492361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.893 [2024-07-25 10:43:27.492371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.893 [2024-07-25 10:43:27.498846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.893 [2024-07-25 10:43:27.498868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.498878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.505259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 
00:28:23.894 [2024-07-25 10:43:27.505282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.505292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.511404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.511428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.511438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.518827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.518851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.518861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.526223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.526248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.526260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.533547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.533571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.533582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.540214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.540237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.540247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.546304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.546327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.546337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.552842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.552865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.552875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.559373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.559396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.559406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.566220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.566243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.566253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.574519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.574543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.574554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.583007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.583031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.583041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.894 [2024-07-25 10:43:27.591901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:23.894 [2024-07-25 10:43:27.591925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.894 [2024-07-25 10:43:27.591935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.153 [2024-07-25 10:43:27.600837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:24.153 [2024-07-25 10:43:27.600860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.153 [2024-07-25 10:43:27.600871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.153 [2024-07-25 10:43:27.610393] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:24.153 [2024-07-25 10:43:27.610417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.153 [2024-07-25 10:43:27.610431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.153 [2024-07-25 10:43:27.620000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:24.153 [2024-07-25 10:43:27.620023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.153 [2024-07-25 10:43:27.620035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.153 [2024-07-25 10:43:27.629936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:24.153 [2024-07-25 10:43:27.629960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.153 [2024-07-25 10:43:27.629970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.153 [2024-07-25 10:43:27.641389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181bf0) 00:28:24.153 [2024-07-25 10:43:27.641412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.153 [2024-07-25 10:43:27.641423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.153 00:28:24.153 Latency(us) 00:28:24.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.153 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:24.153 nvme0n1 : 2.01 4228.13 528.52 0.00 0.00 3780.41 783.16 15414.07 00:28:24.153 =================================================================================================================== 00:28:24.153 Total : 4228.13 528.52 0.00 0.00 3780.41 783.16 15414.07 00:28:24.153 0 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:24.153 | .driver_specific 00:28:24.153 | .nvme_error 00:28:24.153 | .status_code 00:28:24.153 | .command_transient_transport_error' 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 273 > 0 )) 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4043099 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@950 -- # '[' -z 4043099 ']' 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 4043099 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:24.153 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:24.154 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4043099 00:28:24.412 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:24.412 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:24.412 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4043099' 00:28:24.412 killing process with pid 4043099 00:28:24.412 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 4043099 00:28:24.412 Received shutdown signal, test time was about 2.000000 seconds 00:28:24.412 00:28:24.412 Latency(us) 00:28:24.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.412 =================================================================================================================== 00:28:24.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.412 10:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 4043099 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4043749 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4043749 /var/tmp/bperf.sock 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 4043749 ']' 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.412 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:24.413 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.413 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.413 [2024-07-25 10:43:28.100957] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:24.413 [2024-07-25 10:43:28.101012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043749 ] 00:28:24.671 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.671 [2024-07-25 10:43:28.172021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.671 [2024-07-25 10:43:28.241199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.240 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:25.240 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:25.240 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.240 10:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.498 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:25.498 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.498 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.499 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.499 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.499 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.757 nvme0n1 00:28:25.757 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:25.757 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.757 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.757 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.757 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:25.757 10:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.757 Running I/O for 2 seconds... 
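For reference, the xtrace above reduces to the following command sequence. This is a condensed sketch, not the test script itself: the binaries, sockets and RPC calls are the ones shown in the trace, while the $rootdir shorthand, the backgrounding of bdevperf, and the default RPC socket assumed for rpc_cmd are illustrative assumptions.

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf on its own RPC socket: core mask 0x2, randwrite, 4096-byte I/O, queue depth 128, 2 s, wait for RPC start (-z).
  $rootdir/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # Enable per-controller NVMe error statistics and unlimited bdev retries in the bdevperf app.
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # rpc_cmd in the trace goes to the application's default RPC socket (assumed /var/tmp/spdk.sock here):
  # start with crc32c error injection disabled, attach the controller with data digest (--ddgst), then corrupt 256 crc32c operations.
  $rootdir/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rootdir/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # Run the 2-second workload, then read back the transient transport error counter the test asserts on.
  $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The completions that follow report COMMAND TRANSIENT TRANSPORT ERROR (00/22); this is the per-controller counter that the bdev_get_iostat query extracts and that the earlier (( 273 > 0 )) check asserted to be non-zero.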
00:28:25.757 [2024-07-25 10:43:29.425142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fe720 00:28:25.757 [2024-07-25 10:43:29.425789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.757 [2024-07-25 10:43:29.425817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.757 [2024-07-25 10:43:29.434639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:25.757 [2024-07-25 10:43:29.435470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.757 [2024-07-25 10:43:29.435493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.757 [2024-07-25 10:43:29.443255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:25.757 [2024-07-25 10:43:29.444089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.757 [2024-07-25 10:43:29.444111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.757 [2024-07-25 10:43:29.452296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:25.757 [2024-07-25 10:43:29.453186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.757 [2024-07-25 10:43:29.453207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.461281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.017 [2024-07-25 10:43:29.462179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.462199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.470214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.017 [2024-07-25 10:43:29.471074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.471099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.478988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.017 [2024-07-25 10:43:29.479849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.479869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.487775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.017 [2024-07-25 10:43:29.488616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.488635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.496565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.017 [2024-07-25 10:43:29.497425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.497445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.505308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.017 [2024-07-25 10:43:29.506166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.506187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.514044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.017 [2024-07-25 10:43:29.514901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.514921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.522792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.017 [2024-07-25 10:43:29.523643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.523662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.531487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.017 [2024-07-25 10:43:29.532374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.532395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.540284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.017 [2024-07-25 10:43:29.541142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.541162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.549069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.017 [2024-07-25 10:43:29.549933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.549952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.557801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.017 [2024-07-25 10:43:29.558651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.558670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.566518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.017 [2024-07-25 10:43:29.567384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.567404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.575261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.017 [2024-07-25 10:43:29.576140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.576161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.584160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.017 [2024-07-25 10:43:29.585050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.585070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.592968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.017 [2024-07-25 10:43:29.593818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.593839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.601737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.017 [2024-07-25 10:43:29.602587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.602607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.610462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.017 [2024-07-25 10:43:29.611318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.611338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.619191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.017 [2024-07-25 10:43:29.620049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.620069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.627882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.017 [2024-07-25 10:43:29.628736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.628773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.636664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.017 [2024-07-25 10:43:29.637523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.017 [2024-07-25 10:43:29.637543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.017 [2024-07-25 10:43:29.645415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.017 [2024-07-25 10:43:29.646274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.646294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.654078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.018 [2024-07-25 10:43:29.654979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.654999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.662836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.018 [2024-07-25 10:43:29.663691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.663711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.671591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.018 [2024-07-25 10:43:29.672449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.672469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.680295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.018 [2024-07-25 10:43:29.681151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.681171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.689266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.018 [2024-07-25 10:43:29.690127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.690147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.697956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.018 [2024-07-25 10:43:29.698805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.698828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.706639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.018 [2024-07-25 10:43:29.707493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.707513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.018 [2024-07-25 10:43:29.715408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.018 [2024-07-25 10:43:29.716299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.018 [2024-07-25 10:43:29.716320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.277 [2024-07-25 10:43:29.724278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.277 [2024-07-25 10:43:29.725169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.277 [2024-07-25 10:43:29.725189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.277 [2024-07-25 10:43:29.733084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.277 [2024-07-25 10:43:29.733941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.277 [2024-07-25 10:43:29.733961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.277 [2024-07-25 10:43:29.741791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.277 [2024-07-25 10:43:29.742642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.277 [2024-07-25 10:43:29.742661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.277 [2024-07-25 10:43:29.750486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.277 [2024-07-25 10:43:29.751374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.751394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.759282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.278 [2024-07-25 10:43:29.760143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.760163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.768060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.278 [2024-07-25 10:43:29.768927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.768947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.776829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.278 [2024-07-25 10:43:29.777674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.777694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.785605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.278 [2024-07-25 10:43:29.786466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 
10:43:29.786486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.794297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.278 [2024-07-25 10:43:29.795154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.795174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.803034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.278 [2024-07-25 10:43:29.803892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.803912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.811762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.278 [2024-07-25 10:43:29.812609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.812629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.820450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.278 [2024-07-25 10:43:29.821336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.821356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.829223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.278 [2024-07-25 10:43:29.830087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.830107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.837985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.278 [2024-07-25 10:43:29.838816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.838835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.846677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.278 [2024-07-25 10:43:29.847532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.278 [2024-07-25 10:43:29.847551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.855417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.278 [2024-07-25 10:43:29.856283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.856303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.864124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.278 [2024-07-25 10:43:29.865022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.865042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.872894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.278 [2024-07-25 10:43:29.873741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.873761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.881870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.278 [2024-07-25 10:43:29.882724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.882744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.890592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.278 [2024-07-25 10:43:29.891447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.891467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.899308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.278 [2024-07-25 10:43:29.900201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.900221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.908053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.278 [2024-07-25 10:43:29.908927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24013 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.908948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.916772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.278 [2024-07-25 10:43:29.917621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.917640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.925529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.278 [2024-07-25 10:43:29.926383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.926409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.934224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.278 [2024-07-25 10:43:29.935118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.935139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.943249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.278 [2024-07-25 10:43:29.944125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.944145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.952043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.278 [2024-07-25 10:43:29.952913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.952933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.960784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.278 [2024-07-25 10:43:29.961632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.961651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.969542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.278 [2024-07-25 10:43:29.970400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5272 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.278 [2024-07-25 10:43:29.970420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.278 [2024-07-25 10:43:29.978308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.278 [2024-07-25 10:43:29.979186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.279 [2024-07-25 10:43:29.979206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:29.987175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.538 [2024-07-25 10:43:29.988028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:29.988048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:29.995913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.538 [2024-07-25 10:43:29.996774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:29.996794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.004776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.538 [2024-07-25 10:43:30.005649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.005670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.013686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.538 [2024-07-25 10:43:30.014563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.014584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.022648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.538 [2024-07-25 10:43:30.023522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.023542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.031565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.538 [2024-07-25 10:43:30.032442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.032463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.040519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.538 [2024-07-25 10:43:30.041394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.041414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.049488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.538 [2024-07-25 10:43:30.050363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.050384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.058411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.538 [2024-07-25 10:43:30.059284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.059304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.067350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.538 [2024-07-25 10:43:30.068224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.068243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.076291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.538 [2024-07-25 10:43:30.077164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.077184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.085097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.538 [2024-07-25 10:43:30.085986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.086007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.094084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.538 [2024-07-25 10:43:30.094960] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.094981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.102958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.538 [2024-07-25 10:43:30.103832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.103851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.111688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.538 [2024-07-25 10:43:30.112542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.112562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.120394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.538 [2024-07-25 10:43:30.121282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.121301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.129098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.538 [2024-07-25 10:43:30.129972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.129991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.137835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.538 [2024-07-25 10:43:30.138683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.138703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.146566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.538 [2024-07-25 10:43:30.147420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.147440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.155231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.538 [2024-07-25 10:43:30.156086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.538 [2024-07-25 10:43:30.156109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.538 [2024-07-25 10:43:30.163937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.538 [2024-07-25 10:43:30.164785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.164805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.172616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.539 [2024-07-25 10:43:30.173470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.173489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.181326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.539 [2024-07-25 10:43:30.182181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.182200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.190046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.539 [2024-07-25 10:43:30.190899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.190919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.198966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.539 [2024-07-25 10:43:30.199817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.199836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.207645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.539 [2024-07-25 10:43:30.208498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.208518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.216336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.539 [2024-07-25 
10:43:30.217191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.217210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.225002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.539 [2024-07-25 10:43:30.225856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.225875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-07-25 10:43:30.233696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.539 [2024-07-25 10:43:30.234554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-07-25 10:43:30.234573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.242593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.799 [2024-07-25 10:43:30.243481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.243501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.251402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.799 [2024-07-25 10:43:30.252254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.252274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.260092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.799 [2024-07-25 10:43:30.260945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.260964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.268801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.799 [2024-07-25 10:43:30.269648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.269667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.277526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 
00:28:26.799 [2024-07-25 10:43:30.278379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.278398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.286293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.799 [2024-07-25 10:43:30.287148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.287168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.295013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.799 [2024-07-25 10:43:30.295865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.295884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.303699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.799 [2024-07-25 10:43:30.304551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.304571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.312411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.799 [2024-07-25 10:43:30.313263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.313283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.321125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.799 [2024-07-25 10:43:30.321977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.321996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.330044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.799 [2024-07-25 10:43:30.330897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.330917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.338766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with 
pdu=0x2000190ed4e8 00:28:26.799 [2024-07-25 10:43:30.339648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.339668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.347493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.799 [2024-07-25 10:43:30.348377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.348397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.356265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.799 [2024-07-25 10:43:30.357117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.357137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.364993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.799 [2024-07-25 10:43:30.365869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.365888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.373706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.799 [2024-07-25 10:43:30.374573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.374593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.382441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.799 [2024-07-25 10:43:30.383297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.383319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-07-25 10:43:30.391160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.799 [2024-07-25 10:43:30.392013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-07-25 10:43:30.392032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.399853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.800 [2024-07-25 10:43:30.400699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.400725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.408565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.800 [2024-07-25 10:43:30.409417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.409437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.417249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.800 [2024-07-25 10:43:30.418102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.418122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.425915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.800 [2024-07-25 10:43:30.426760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.426780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.434609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.800 [2024-07-25 10:43:30.435462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.435481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.443289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.800 [2024-07-25 10:43:30.444143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.444162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.452229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.800 [2024-07-25 10:43:30.453081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.453101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.460912] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.800 [2024-07-25 10:43:30.461760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.461782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.469594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:26.800 [2024-07-25 10:43:30.470476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.470496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.478358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:26.800 [2024-07-25 10:43:30.479211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.479231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.487035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:26.800 [2024-07-25 10:43:30.487887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.487907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-07-25 10:43:30.495728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:26.800 [2024-07-25 10:43:30.496590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-07-25 10:43:30.496610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.504577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.060 [2024-07-25 10:43:30.505466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.505486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.513315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.060 [2024-07-25 10:43:30.514167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.514186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.522026] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.060 [2024-07-25 10:43:30.522882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.522902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.530707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.060 [2024-07-25 10:43:30.531561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.531580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.539419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.060 [2024-07-25 10:43:30.540275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.540295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.548143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.060 [2024-07-25 10:43:30.549038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.549058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.556872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.060 [2024-07-25 10:43:30.557725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.557760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.565613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.060 [2024-07-25 10:43:30.566465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.566485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.574420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.060 [2024-07-25 10:43:30.575274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.575293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 
[2024-07-25 10:43:30.583147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.060 [2024-07-25 10:43:30.583999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.584018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.591973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.060 [2024-07-25 10:43:30.592835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.592854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.600675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.060 [2024-07-25 10:43:30.601531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.601551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.609348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.060 [2024-07-25 10:43:30.610200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.610220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.618075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.060 [2024-07-25 10:43:30.618972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.618992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.626807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.060 [2024-07-25 10:43:30.627653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.627673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.635537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.060 [2024-07-25 10:43:30.636389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.636409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:28:27.060 [2024-07-25 10:43:30.644329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.060 [2024-07-25 10:43:30.645182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.645202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.652988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.060 [2024-07-25 10:43:30.653836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.653855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.661680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.060 [2024-07-25 10:43:30.662532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.662551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.060 [2024-07-25 10:43:30.670396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.060 [2024-07-25 10:43:30.671285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.060 [2024-07-25 10:43:30.671305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.679083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.061 [2024-07-25 10:43:30.679937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.679956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.687793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.061 [2024-07-25 10:43:30.688643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.688665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.696515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.061 [2024-07-25 10:43:30.697369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.697389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.705360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.061 [2024-07-25 10:43:30.706212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.706232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.714099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.061 [2024-07-25 10:43:30.714950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.714970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.722755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.061 [2024-07-25 10:43:30.723602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.723621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.731424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.061 [2024-07-25 10:43:30.732308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.732328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.740167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.061 [2024-07-25 10:43:30.741020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.741039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.748860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.061 [2024-07-25 10:43:30.749706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.749731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.061 [2024-07-25 10:43:30.757540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.061 [2024-07-25 10:43:30.758429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.061 [2024-07-25 10:43:30.758449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.320 [2024-07-25 10:43:30.766415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.320 [2024-07-25 10:43:30.767315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.320 [2024-07-25 10:43:30.767335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.320 [2024-07-25 10:43:30.775161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.320 [2024-07-25 10:43:30.776026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.776046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.783895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.321 [2024-07-25 10:43:30.784743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.784763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.792600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.321 [2024-07-25 10:43:30.793455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.793474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.801208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.321 [2024-07-25 10:43:30.802076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.802096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.809928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.321 [2024-07-25 10:43:30.810777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.810797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.818608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.321 [2024-07-25 10:43:30.819460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.819480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.827285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.321 [2024-07-25 10:43:30.828142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.828162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.835973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.321 [2024-07-25 10:43:30.836862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.836881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.844678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.321 [2024-07-25 10:43:30.845534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.845557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.853435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.321 [2024-07-25 10:43:30.854291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.854311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.861997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.321 [2024-07-25 10:43:30.862849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.862869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.870613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.321 [2024-07-25 10:43:30.871467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.871487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.879310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.321 [2024-07-25 10:43:30.880164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.880183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.887985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.321 [2024-07-25 10:43:30.888833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.888853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.896724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.321 [2024-07-25 10:43:30.897573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.897592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.905676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.321 [2024-07-25 10:43:30.906529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.906549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.914390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.321 [2024-07-25 10:43:30.915243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.915266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.923095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.321 [2024-07-25 10:43:30.923993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.924012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.931846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.321 [2024-07-25 10:43:30.932690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.932710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.940513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.321 [2024-07-25 10:43:30.941367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 
10:43:30.941387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.949283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.321 [2024-07-25 10:43:30.950138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.950158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.958150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.321 [2024-07-25 10:43:30.959019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.959038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.966845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.321 [2024-07-25 10:43:30.967728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.967747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.975605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.321 [2024-07-25 10:43:30.976461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.976480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.984334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.321 [2024-07-25 10:43:30.985186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.985206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:30.993057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.321 [2024-07-25 10:43:30.993918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-07-25 10:43:30.993937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.321 [2024-07-25 10:43:31.001779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.321 [2024-07-25 10:43:31.002660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:27.322 [2024-07-25 10:43:31.002680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.322 [2024-07-25 10:43:31.010517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.322 [2024-07-25 10:43:31.011372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.322 [2024-07-25 10:43:31.011392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.322 [2024-07-25 10:43:31.019277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.322 [2024-07-25 10:43:31.020166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.322 [2024-07-25 10:43:31.020186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.581 [2024-07-25 10:43:31.028182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.581 [2024-07-25 10:43:31.029072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.581 [2024-07-25 10:43:31.029093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.581 [2024-07-25 10:43:31.036953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.581 [2024-07-25 10:43:31.037843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.581 [2024-07-25 10:43:31.037863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.581 [2024-07-25 10:43:31.045721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.581 [2024-07-25 10:43:31.046572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.046591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.054464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.582 [2024-07-25 10:43:31.055324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.055344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.063182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.582 [2024-07-25 10:43:31.064070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8434 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.064089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.072179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.582 [2024-07-25 10:43:31.073073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.073093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.081031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.582 [2024-07-25 10:43:31.081937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.081957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.089784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.582 [2024-07-25 10:43:31.090637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.090658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.098536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.582 [2024-07-25 10:43:31.099394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.099414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.107243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.582 [2024-07-25 10:43:31.108100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.108121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.115978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.582 [2024-07-25 10:43:31.116869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.116888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.124740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.582 [2024-07-25 10:43:31.125589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10402 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.125609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.133443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.582 [2024-07-25 10:43:31.134302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.134322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.142222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.582 [2024-07-25 10:43:31.143125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.143147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.150938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.582 [2024-07-25 10:43:31.151792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.151812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.159616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.582 [2024-07-25 10:43:31.160476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.160496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.168364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.582 [2024-07-25 10:43:31.169223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.169242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.177047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.582 [2024-07-25 10:43:31.177912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.177932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.185755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.582 [2024-07-25 10:43:31.186641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:24682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.186662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.194503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.582 [2024-07-25 10:43:31.195392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.195412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.203218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.582 [2024-07-25 10:43:31.204100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.204120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.212174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.582 [2024-07-25 10:43:31.213031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.213052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.220906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.582 [2024-07-25 10:43:31.221769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.221790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.229642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.582 [2024-07-25 10:43:31.230494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.230514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.238426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.582 [2024-07-25 10:43:31.239312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.239332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.247141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.582 [2024-07-25 10:43:31.247995] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.248014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.255868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.582 [2024-07-25 10:43:31.256725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.256745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.264567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.582 [2024-07-25 10:43:31.265457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.265477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.582 [2024-07-25 10:43:31.273318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.582 [2024-07-25 10:43:31.274175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.582 [2024-07-25 10:43:31.274194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.583 [2024-07-25 10:43:31.282078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.583 [2024-07-25 10:43:31.282971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.583 [2024-07-25 10:43:31.282991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.290982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.842 [2024-07-25 10:43:31.291835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.291855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.299689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.842 [2024-07-25 10:43:31.300549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.300570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.308412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.842 [2024-07-25 10:43:31.309273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.309294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.317104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.842 [2024-07-25 10:43:31.317961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.317980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.326063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.842 [2024-07-25 10:43:31.326923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.326944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.334834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.842 [2024-07-25 10:43:31.335674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.335693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.343520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10 00:28:27.842 [2024-07-25 10:43:31.344409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.344429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.352268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8 00:28:27.842 [2024-07-25 10:43:31.353149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.353169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.361035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88 00:28:27.842 [2024-07-25 10:43:31.361895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.842 [2024-07-25 10:43:31.361915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:27.842 [2024-07-25 10:43:31.369772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270 00:28:27.842 [2024-07-25 
10:43:31.370623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:27.842 [2024-07-25 10:43:31.370646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:27.842 [2024-07-25 10:43:31.378490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10
00:28:27.842 [2024-07-25 10:43:31.379379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:27.842 [2024-07-25 10:43:31.379399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:27.842 [2024-07-25 10:43:31.387244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ed4e8
00:28:27.842 [2024-07-25 10:43:31.388121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:27.842 [2024-07-25 10:43:31.388140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:27.842 [2024-07-25 10:43:31.396009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190f8e88
00:28:27.842 [2024-07-25 10:43:31.396867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:27.842 [2024-07-25 10:43:31.396887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:27.842 [2024-07-25 10:43:31.404804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190ef270
00:28:27.842 [2024-07-25 10:43:31.405655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:27.842 [2024-07-25 10:43:31.405674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:27.842 [2024-07-25 10:43:31.413509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f4810) with pdu=0x2000190fac10
00:28:27.842 [2024-07-25 10:43:31.414397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:27.842 [2024-07-25 10:43:31.414418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:27.842
00:28:27.842 Latency(us)
00:28:27.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.842 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:27.842 nvme0n1 : 2.00 29110.62 113.71 0.00 0.00 4391.47 2136.47 10276.04
00:28:27.842 ===================================================================================================================
00:28:27.842 Total : 29110.62 113.71 0.00 0.00 4391.47 2136.47 10276.04
00:28:27.842 0
00:28:27.842 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:27.843 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:27.843 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:27.843 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:27.843 | .driver_specific
00:28:27.843 | .nvme_error
00:28:27.843 | .status_code
00:28:27.843 | .command_transient_transport_error'
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 ))
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4043749
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 4043749 ']'
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 4043749
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4043749
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4043749'
00:28:28.102 killing process with pid 4043749
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 4043749
00:28:28.102 Received shutdown signal, test time was about 2.000000 seconds
00:28:28.102
00:28:28.102 Latency(us)
00:28:28.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.102 ===================================================================================================================
00:28:28.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:28.102 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 4043749
00:28:28.444 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:28.444 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4044432
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4044432 /var/tmp/bperf.sock
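The get_transient_errcount check above is nothing more than bdev_get_iostat over the bperf RPC socket with a jq filter on the counters kept by --nvme-error-stat; the assertion passes because the command_transient_transport_error count (228 in this run) is non-zero. A minimal stand-alone sketch of the same query, assuming the SPDK checkout path used on this builder (the errcount variable name is illustrative, not part of the test):

  #!/usr/bin/env bash
  # Read the per-controller NVMe error counters kept by bdev_nvme (--nvme-error-stat)
  # from the bdevperf instance listening on /var/tmp/bperf.sock.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path as seen in this log
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # digest.sh only asserts that the counter is greater than zero.
  (( errcount > 0 )) && echo "nvme0n1 saw $errcount transient transport errors"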
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 4044432 ']'
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:28.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:28.445 10:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:28.445 [2024-07-25 10:43:31.890584] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization...
00:28:28.445 [2024-07-25 10:43:31.890639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044432 ]
00:28:28.445 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:28.445 Zero copy mechanism will not be used.
00:28:28.445 EAL: No free 2048 kB hugepages reported on node 1
00:28:28.445 [2024-07-25 10:43:31.958900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:28.445 [2024-07-25 10:43:32.022337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:29.012 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:29.012 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:29.012 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:29.012 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:29.270 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:29.270 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:29.270 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.270 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:29.270 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:29.270 10:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:29.529 nvme0n1
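Condensed, the randwrite/131072/qd16 error pass being set up here comes down to the sequence below. This is a sketch of what host/digest.sh drives rather than the script itself (the paths are the ones visible in this log, while the bperf_rpc helper function and the socket-wait loop are illustrative stand-ins), and the crc32c corrupt injection it ends with shows up in the xtrace lines that follow:

  #!/usr/bin/env bash
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk               # builder workspace path from this log
  bperf_rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }  # same role as bperf_rpc in digest.sh

  # Start bdevperf on core 1 (-m 2) with its own RPC socket; -z makes it idle until perform_tests.
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done                   # crude stand-in for waitforlisten

  # Keep NVMe error-status counters and never retry failed I/O, then attach the TCP
  # controller with data digest (--ddgst) enabled.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt crc32c results through the accel error-injection RPC; in the test this is sent
  # with rpc_cmd, i.e. to the default RPC socket of the target application, not to bperf.sock.
  "$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick off the timed run; each corrupted digest then surfaces below as a data_crc32_calc_done
  # error followed by a completion carrying COMMAND TRANSIENT TRANSPORT ERROR.
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests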
00:28:29.529 10:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:29.529 10:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:29.529 10:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.529 10:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:29.529 10:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:29.529 10:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:29.787 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:29.787 Zero copy mechanism will not be used.
00:28:29.787 Running I/O for 2 seconds...
00:28:29.787 [2024-07-25 10:43:33.321883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90
00:28:29.787 [2024-07-25 10:43:33.322122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.787 [2024-07-25 10:43:33.322153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.787 [2024-07-25 10:43:33.333927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90
00:28:29.787 [2024-07-25 10:43:33.334289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.787 [2024-07-25 10:43:33.334318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.787 [2024-07-25 10:43:33.343092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90
00:28:29.788 [2024-07-25 10:43:33.343503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.788 [2024-07-25 10:43:33.343527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.788 [2024-07-25 10:43:33.350992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90
00:28:29.788 [2024-07-25 10:43:33.351332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.788 [2024-07-25 10:43:33.351354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.788 [2024-07-25 10:43:33.362848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90
00:28:29.788 [2024-07-25 10:43:33.363364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.788 [2024-07-25 10:43:33.363385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0
sqhd:0021 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.378288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.378685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.378707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.387178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.387523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.387543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.395258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.395612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.395634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.401884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.402223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.402245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.408357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.408793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.408815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.415912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.416268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.416288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.424114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.424454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.424474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.430900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.431245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.431266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.438655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.439000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.439021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.446365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.446696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.446721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.454421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.454758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.454778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.461878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.462224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.462245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.470390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.470747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.470768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.478162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.478508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.478528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.788 [2024-07-25 10:43:33.486505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:29.788 [2024-07-25 10:43:33.486867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.788 [2024-07-25 10:43:33.486888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.494477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.494912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.494937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.502799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.503156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.503178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.517038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.517698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.517723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.530599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.530797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.530816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.540005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.540431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.540451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.548560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.548938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 
[2024-07-25 10:43:33.548959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.556913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.557263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.557283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.564655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.565077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.565098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.572663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.573088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.573109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.579991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.580477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.580497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.586974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.587319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.587340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.594224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.594585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.594605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.600931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.601276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.601297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.607120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.607464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.607485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.615290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.615631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.615651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.623711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.624055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.624075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.630864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.630950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.630969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.638124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.638464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.638488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.645436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.645792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.645813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.652534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.652904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.652925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.659754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.660092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.660112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.666991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.667346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.667367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.673837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.674189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.674209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.681130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.681492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.681512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.688870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.689224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.689244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.698518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.698623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.698642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.707017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.707382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.707402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.716532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.716889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.716909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.726673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.727038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.727058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.737195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.737552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.737572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.048 [2024-07-25 10:43:33.747439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.048 [2024-07-25 10:43:33.747802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.048 [2024-07-25 10:43:33.747823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.757034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.757452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.757473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.766467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.766933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.766953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.777414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 
[2024-07-25 10:43:33.777533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.777553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.795477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.795887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.795908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.807981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.808344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.808365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.818037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.818388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.818419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.827854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.828281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.828301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.835680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.835852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.835871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.844776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.845319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.845340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.853264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.853603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.853625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.861095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.861432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.861453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.876772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.877414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.877435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.887156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.887536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.887560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.894388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.894745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.894766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.902073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.902403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.902426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.908943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.909292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.909314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.916144] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.916512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.916533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.922992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.923399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.923419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.930297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.930639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.930659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.944430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.944954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.944975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.955894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.956258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.956278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.962949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.963329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.963350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.307 [2024-07-25 10:43:33.969646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:30.307 [2024-07-25 10:43:33.969976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-07-25 10:43:33.969996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
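Every injected corruption in this run produces the same three-line pattern seen above and below: a data_crc32_calc_done *ERROR* from tcp.c, the WRITE command print, and a completion carrying COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 / status code 0x22, which is what the command_transient_transport_error counter read by get_transient_errcount is counting. For a quick sanity check against a captured console log (the build.log filename is only a placeholder), the two sides can be tallied directly:

  # Tally injected digest errors and the resulting transient-transport completions.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log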
00:28:31.350 [2024-07-25 10:43:34.897951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.897972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.904382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.904754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.904775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.910559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.910905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.910924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.916751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.917091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.917111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.922798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.923116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.923136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.929097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.929474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.929494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.935564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.935918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.935938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.941783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.942111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.942132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.947492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.947841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.947861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.953606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.953936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.953957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.959697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.960043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.960064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.965872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.966194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.966215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.971947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.972280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.972301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.350 [2024-07-25 10:43:34.978290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.350 [2024-07-25 10:43:34.978618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.350 [2024-07-25 10:43:34.978638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:34.984159] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:34.984494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:34.984515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:34.990742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:34.991068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:34.991088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:34.996803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:34.997152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:34.997173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:35.002581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.002971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.002991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:35.008485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.008876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.008896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:35.014771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.015111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.015131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:35.021182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.021525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.021544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:31.351 [2024-07-25 10:43:35.027448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.027837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.027858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:35.034014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.034351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.034375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:35.039943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.040322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.040343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.351 [2024-07-25 10:43:35.046254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.351 [2024-07-25 10:43:35.046595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.351 [2024-07-25 10:43:35.046615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.053179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.053513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.053534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.059409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.059799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.059819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.065697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.066086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.066106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.071566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.071963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.071983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.077659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.078070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.078090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.083959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.084365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.084386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.089835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.090167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.090187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.096033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.096411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.096432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.102699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.103092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.103113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.108621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.109076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.109096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.115285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.115699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.115725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.611 [2024-07-25 10:43:35.121475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.611 [2024-07-25 10:43:35.121868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.611 [2024-07-25 10:43:35.121888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.127769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.128160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.128180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.135843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.136287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.136307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.144472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.144888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.144908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.152149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.152519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.152539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.159754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.160146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.160167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.167088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.167414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.167435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.174710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.175199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.175219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.183060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.183400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.183420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.191101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.191510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.191530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.198938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.199268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.199289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.205444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.205794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.205815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.211531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.211857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 
[2024-07-25 10:43:35.211881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.217503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.217867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.217888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.223723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.224055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.224075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.230380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.230705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.230732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.236411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.236753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.236773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.242786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.243121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.243141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.248867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.249221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.249241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.255089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.255421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.255441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.261236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.261566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.261586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.267345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.267695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.267722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.273671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.274026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.274045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.279166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.279498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.279518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.285229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.285553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.285572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.291698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.292119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.612 [2024-07-25 10:43:35.292140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.612 [2024-07-25 10:43:35.298062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90 00:28:31.612 [2024-07-25 10:43:35.298394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.612 [2024-07-25 10:43:35.298414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.612 [2024-07-25 10:43:35.303795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10f6490) with pdu=0x2000190fef90
00:28:31.613 [2024-07-25 10:43:35.303919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.613 [2024-07-25 10:43:35.303939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.613
00:28:31.613 Latency(us)
00:28:31.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:31.613 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:31.613 nvme0n1 : 2.00 4164.20 520.53 0.00 0.00 3836.83 2555.90 18035.51
00:28:31.613 ===================================================================================================================
00:28:31.613 Total : 4164.20 520.53 0.00 0.00 3836.83 2555.90 18035.51
00:28:31.613 0
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:31.872 | .driver_specific
00:28:31.872 | .nvme_error
00:28:31.872 | .status_code
00:28:31.872 | .command_transient_transport_error'
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 269 > 0 ))
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4044432
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 4044432 ']'
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 4044432
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4044432
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4044432'
00:28:31.872 killing process with pid 4044432
00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 4044432
00:28:31.872 Received shutdown signal, test time was about 2.000000 seconds
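The get_transient_errcount trace above is the heart of this test's pass/fail check: the bperf RPC socket is queried for the bdev's I/O statistics and jq pulls out the count of commands that completed with a transient transport error. A minimal stand-alone sketch of that query, using only the socket path, bdev name, and jq filter that appear in this log (anything else is setup-specific), might look like:

  # Adjust the rpc.py path to your SPDK checkout; socket, bdev and filter are as logged above.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                 bdev_get_iostat -b nvme0n1 \
             | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Each injected data-digest error should have been surfaced as a transient transport error.
  (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"

Here the counter came back as 269, which is why the (( 269 > 0 )) assertion above succeeds and the bperf process is then shut down.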
00:28:31.872 Latency(us) 00:28:31.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.872 =================================================================================================================== 00:28:31.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.872 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 4044432 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4042272 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 4042272 ']' 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 4042272 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4042272 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4042272' 00:28:32.131 killing process with pid 4042272 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 4042272 00:28:32.131 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 4042272 00:28:32.390 00:28:32.390 real 0m16.454s 00:28:32.390 user 0m31.015s 00:28:32.390 sys 0m4.829s 00:28:32.390 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.390 10:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.390 ************************************ 00:28:32.390 END TEST nvmf_digest_error 00:28:32.390 ************************************ 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:32.390 rmmod nvme_tcp 00:28:32.390 rmmod nvme_fabrics 00:28:32.390 rmmod nvme_keyring 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.390 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@489 -- # '[' -n 4042272 ']' 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 4042272 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 4042272 ']' 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 4042272 00:28:32.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4042272) - No such process 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 4042272 is not found' 00:28:32.391 Process with pid 4042272 is not found 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.391 10:43:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.927 00:28:34.927 real 0m42.859s 00:28:34.927 user 1m5.027s 00:28:34.927 sys 0m15.252s 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.927 ************************************ 00:28:34.927 END TEST nvmf_digest 00:28:34.927 ************************************ 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.927 ************************************ 00:28:34.927 START TEST nvmf_bdevperf 00:28:34.927 ************************************ 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.927 * Looking for test storage... 
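Before the bdevperf test proper begins, the nvmftestfini/nvmfcleanup trace above tears down the digest test environment: the kernel NVMe/TCP initiator modules are unloaded, the nvmf app pid is killed if it is still around, and the test network state is flushed. A rough, hedged equivalent of those steps (module, interface, and namespace names are the ones visible in this log; the netns deletion is an assumption about what _remove_spdk_ns amounts to) would be:

  sync
  sudo modprobe -v -r nvme-tcp        # the log shows this also pulling out nvme_tcp, nvme_fabrics and nvme_keyring
  sudo modprobe -v -r nvme-fabrics
  sudo ip -4 addr flush cvl_0_1                                # drop the initiator-side test address
  sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumed equivalent of _remove_spdk_ns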
00:28:34.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.927 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.928 10:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
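nvmftestinit then has prepare_net_devs probe the machine for supported NICs: gather_supported_nvmf_pci_devs builds lists of Intel E810/X722 and Mellanox device IDs and, for every matching PCI function, records the net devices exposed under it in sysfs. A rough stand-alone approximation of that discovery (not the script's own code; it assumes lspci is available and the same 8086:159b E810 parts as on this host) is:

  # List Intel E810 (8086:159b) functions and the kernel netdevs behind them.
  for pci in $(lspci -Dnm -d 8086:159b | awk '{print $1}'); do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] && echo "Found net device under $pci: $(basename "$netdev")"
      done
  done

On this host that corresponds to the cvl_0_0 and cvl_0_1 interfaces reported under 0000:af:00.0 and 0000:af:00.1 below, which the script then assigns to the target and initiator sides respectively.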
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:41.619 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:41.619 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.619 10:43:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:41.619 Found net devices under 0000:af:00.0: cvl_0_0 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.619 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:41.620 Found net devices under 0000:af:00.1: cvl_0_1 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.620 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:28:41.879 00:28:41.879 --- 10.0.0.2 ping statistics --- 00:28:41.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.879 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:28:41.879 00:28:41.879 --- 10.0.0.1 ping statistics --- 00:28:41.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.879 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4048676 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4048676 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 4048676 ']' 
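Annotation: the nvmf_tcp_init steps traced above build a loopback-style TCP topology out of the two ice ports. Below is a condensed sketch of that plumbing for anyone reproducing it by hand; the interface names cvl_0_0/cvl_0_1 and the namespace name are machine-specific values taken from this log, and the TGT_IF/INI_IF/NS variables are illustrative, not part of the harness.

# Sketch of the namespace/interface setup performed by nvmf_tcp_init above.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                  # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root namespace -> target IP
ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator IP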
00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.879 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.879 [2024-07-25 10:43:45.513247] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:41.879 [2024-07-25 10:43:45.513300] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.879 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.138 [2024-07-25 10:43:45.588569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:42.138 [2024-07-25 10:43:45.663621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.138 [2024-07-25 10:43:45.663659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.138 [2024-07-25 10:43:45.663668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.138 [2024-07-25 10:43:45.663677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.138 [2024-07-25 10:43:45.663701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
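Annotation: the nvmfappstart/waitforlisten sequence whose trace surrounds this point launches nvmf_tgt inside the namespace (core mask 0xE, trace mask 0xFFFF, as in the command line above) and blocks until the RPC socket /var/tmp/spdk.sock is up. A minimal sketch follows; the socket-polling loop is a simplified stand-in for the harness's waitforlisten helper, and SPDK_ROOT is just an illustrative variable.

# Simplified equivalent of nvmfappstart + waitforlisten shown above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                                   # 4048676 in this run
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # crude wait for the RPC socket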
00:28:42.138 [2024-07-25 10:43:45.663819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.138 [2024-07-25 10:43:45.663910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.138 [2024-07-25 10:43:45.663912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.706 [2024-07-25 10:43:46.379393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.706 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.965 Malloc0 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.965 [2024-07-25 10:43:46.440029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.965 { 00:28:42.965 "params": { 00:28:42.965 "name": "Nvme$subsystem", 00:28:42.965 "trtype": "$TEST_TRANSPORT", 00:28:42.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.965 "adrfam": "ipv4", 00:28:42.965 "trsvcid": "$NVMF_PORT", 00:28:42.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.965 "hdgst": ${hdgst:-false}, 00:28:42.965 "ddgst": ${ddgst:-false} 00:28:42.965 }, 00:28:42.965 "method": "bdev_nvme_attach_controller" 00:28:42.965 } 00:28:42.965 EOF 00:28:42.965 )") 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:42.965 10:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:42.965 "params": { 00:28:42.965 "name": "Nvme1", 00:28:42.965 "trtype": "tcp", 00:28:42.965 "traddr": "10.0.0.2", 00:28:42.965 "adrfam": "ipv4", 00:28:42.965 "trsvcid": "4420", 00:28:42.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:42.965 "hdgst": false, 00:28:42.965 "ddgst": false 00:28:42.965 }, 00:28:42.965 "method": "bdev_nvme_attach_controller" 00:28:42.965 }' 00:28:42.965 [2024-07-25 10:43:46.490703] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:42.965 [2024-07-25 10:43:46.490756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048951 ] 00:28:42.965 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.965 [2024-07-25 10:43:46.561419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.965 [2024-07-25 10:43:46.630817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.224 Running I/O for 1 seconds... 
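Annotation: the 1-second verify run just launched (its latency table follows below) is configured entirely through a JSON blob that gen_nvmf_target_json assembles and hands to bdevperf via --json /dev/fd/62. The sketch below shows a hypothetical standalone equivalent: only the params/method fragment appears verbatim in the log, so the surrounding "subsystems" wrapper is assumed here, and SPDK_ROOT, RPC and the /tmp config path are illustrative names.

# Target-side provisioning, mirroring the rpc_cmd calls traced above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_ROOT/scripts/rpc.py"
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host-side bdevperf config; the "subsystems" wrapper is an assumption.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
"$SPDK_ROOT/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1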
00:28:44.161 00:28:44.161 Latency(us) 00:28:44.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:44.161 Verification LBA range: start 0x0 length 0x4000 00:28:44.161 Nvme1n1 : 1.01 10977.92 42.88 0.00 0.00 11619.19 2097.15 50331.65 00:28:44.161 =================================================================================================================== 00:28:44.161 Total : 10977.92 42.88 0.00 0.00 11619.19 2097.15 50331.65 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4049215 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:44.420 { 00:28:44.420 "params": { 00:28:44.420 "name": "Nvme$subsystem", 00:28:44.420 "trtype": "$TEST_TRANSPORT", 00:28:44.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.420 "adrfam": "ipv4", 00:28:44.420 "trsvcid": "$NVMF_PORT", 00:28:44.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.420 "hdgst": ${hdgst:-false}, 00:28:44.420 "ddgst": ${ddgst:-false} 00:28:44.420 }, 00:28:44.420 "method": "bdev_nvme_attach_controller" 00:28:44.420 } 00:28:44.420 EOF 00:28:44.420 )") 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:44.420 10:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:44.420 "params": { 00:28:44.420 "name": "Nvme1", 00:28:44.420 "trtype": "tcp", 00:28:44.420 "traddr": "10.0.0.2", 00:28:44.420 "adrfam": "ipv4", 00:28:44.420 "trsvcid": "4420", 00:28:44.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.420 "hdgst": false, 00:28:44.420 "ddgst": false 00:28:44.420 }, 00:28:44.420 "method": "bdev_nvme_attach_controller" 00:28:44.420 }' 00:28:44.420 [2024-07-25 10:43:48.033114] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:28:44.420 [2024-07-25 10:43:48.033167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049215 ] 00:28:44.420 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.420 [2024-07-25 10:43:48.103694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.679 [2024-07-25 10:43:48.170049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.679 Running I/O for 15 seconds... 
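Annotation: the 15-second run started above (-t 15 -f) is the failure-injection half of the test. On the next lines host/bdevperf.sh kills the nvmf_tgt with SIGKILL while the verify workload is still in flight, which is why the remainder of this section is dominated by "ABORTED - SQ DELETION" completions from the initiator's queue pairs; those aborts are the expected consequence of the kill rather than a test failure by themselves. A minimal sketch of that step, using the nvmfpid captured when the target was launched:

# Failure injection during the 15 s bdevperf run, as performed below.
kill -9 "$nvmfpid"    # 4048676 in this run; tears the TCP connections down hard
sleep 3               # let in-flight I/O surface as aborted completions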
00:28:47.972 10:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4048676 00:28:47.972 10:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:47.972 [2024-07-25 10:43:51.001197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 
[2024-07-25 10:43:51.001449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.972 [2024-07-25 10:43:51.001630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.972 [2024-07-25 10:43:51.001641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001670] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.001981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.001990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002210] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.973 [2024-07-25 10:43:51.002457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.973 [2024-07-25 10:43:51.002466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:47.974 [2024-07-25 10:43:51.002815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.002983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.002993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 
10:43:51.003013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.974 [2024-07-25 10:43:51.003042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.974 [2024-07-25 10:43:51.003200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.974 [2024-07-25 10:43:51.003219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.974 [2024-07-25 10:43:51.003239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.974 [2024-07-25 10:43:51.003249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.975 [2024-07-25 10:43:51.003258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.975 [2024-07-25 10:43:51.003277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.975 [2024-07-25 10:43:51.003297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.975 [2024-07-25 10:43:51.003316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.975 [2024-07-25 10:43:51.003336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.975 [2024-07-25 10:43:51.003939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.975 [2024-07-25 10:43:51.003949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ac8c0 is same with the state(5) to be set 00:28:47.975 [2024-07-25 10:43:51.003961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:47.976 [2024-07-25 10:43:51.003968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:47.976 [2024-07-25 10:43:51.003976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119176 len:8 PRP1 0x0 PRP2 0x0 00:28:47.976 [2024-07-25 10:43:51.003986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.976 [2024-07-25 10:43:51.004030] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ac8c0 was disconnected and freed. reset controller. 
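The block above is the host draining I/O queue pair qid:1: every outstanding READ/WRITE is completed with the status pair (00/08), that is Status Code Type 0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion), after which the TCP qpair is disconnected, freed, and a controller reset is scheduled. As a rough aid for reading the "(SCT/SC)" pairs that spdk_nvme_print_completion emits, here is a minimal, illustrative decoder; the helper and its deliberately partial lookup table are an assumption for explanation only, not SPDK code:

# Illustrative only (not SPDK code): decode the "(SCT/SC)" status pair that
# spdk_nvme_print_completion prints, e.g. "(00/08)" in the log above.
GENERIC_COMMAND_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "ABORTED - SQ DELETION",  # the status seen for every queued I/O here
}

def decode_status(sct: int, sc: int) -> str:
    # SCT 0x0 is the Generic Command Status type defined by the NVMe base spec.
    if sct == 0x0:
        return GENERIC_COMMAND_STATUS.get(sc, "GENERIC SC 0x%02x" % sc)
    return "SCT 0x%x / SC 0x%02x" % (sct, sc)

print(decode_status(0x00, 0x08))  # prints: ABORTED - SQ DELETION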
00:28:47.976 [2024-07-25 10:43:51.006744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.006795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.007401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.007419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.007429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.007600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.007777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.007787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.007797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.010464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.976 [2024-07-25 10:43:51.019842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.020390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.020445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.020478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.020968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.021134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.021147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.021156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.023687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
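From this point the log settles into a retry cycle: bdev_nvme disconnects the controller, the TCP transport attempts to reconnect to 10.0.0.2 port 4420, connect() fails with errno 111, controller initialization is declared failed, and the reset is retried a few milliseconds later. On Linux, errno 111 is ECONNREFUSED, meaning nothing is accepting connections at the target address at that instant, which is consistent with the target side being torn down during the test. A small, self-contained sketch of that failure mode follows; the address and port are simply the values quoted in the log, and the helper is illustrative rather than part of any SPDK tooling:

import errno
import socket

# On Linux, errno 111 maps to ECONNREFUSED.
print(errno.errorcode.get(111))  # prints: ECONNREFUSED (on Linux)

def can_connect(addr: str, port: int, timeout: float = 1.0) -> bool:
    # Mirror what the host-side reconnect path observes: a plain TCP connect
    # that either succeeds or is refused because no listener is present.
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except ConnectionRefusedError:  # raised when connect() returns errno 111
        return False

# Example with the endpoint quoted in the log (only meaningful on that host):
# can_connect("10.0.0.2", 4420)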
00:28:47.976 [2024-07-25 10:43:51.032614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.033167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.033219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.033251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.033670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.033842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.033852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.033861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.036392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.976 [2024-07-25 10:43:51.045337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.045864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.045917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.045949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.046540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.046817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.046828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.046837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.049369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.976 [2024-07-25 10:43:51.058121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.058657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.058708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.058756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.059346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.059944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.059954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.059963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.062495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.976 [2024-07-25 10:43:51.070863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.071381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.071434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.071467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.071962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.072128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.072138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.072147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.074682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.976 [2024-07-25 10:43:51.083609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.084155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.084207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.084239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.084851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.085017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.085027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.085036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.087569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.976 [2024-07-25 10:43:51.096362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.096860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.096878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.096886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.097042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.097199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.097209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.097217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.099744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.976 [2024-07-25 10:43:51.109103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.109618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.109670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.109701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.110202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.976 [2024-07-25 10:43:51.110368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.976 [2024-07-25 10:43:51.110379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.976 [2024-07-25 10:43:51.110387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.976 [2024-07-25 10:43:51.112923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.976 [2024-07-25 10:43:51.121776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.976 [2024-07-25 10:43:51.122290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.976 [2024-07-25 10:43:51.122307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.976 [2024-07-25 10:43:51.122315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.976 [2024-07-25 10:43:51.122472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.122628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.122638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.122646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.125197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.977 [2024-07-25 10:43:51.134418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.134937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.134989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.135020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.977 [2024-07-25 10:43:51.135612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.136154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.136165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.136173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.138705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.977 [2024-07-25 10:43:51.147195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.147719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.147736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.147762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.977 [2024-07-25 10:43:51.147927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.148093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.148103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.148115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.150650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.977 [2024-07-25 10:43:51.159874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.160382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.160399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.160408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.977 [2024-07-25 10:43:51.160574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.160755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.160765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.160774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.163294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.977 [2024-07-25 10:43:51.172610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.173160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.173214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.173246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.977 [2024-07-25 10:43:51.173852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.174285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.174296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.174305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.176842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.977 [2024-07-25 10:43:51.185292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.185806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.185823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.185832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.977 [2024-07-25 10:43:51.185990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.186146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.186155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.186163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.188681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.977 [2024-07-25 10:43:51.197961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.198469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.198519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.198551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.977 [2024-07-25 10:43:51.199047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.199213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.199223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.199232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.201785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.977 [2024-07-25 10:43:51.210621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.211151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.211169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.211178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.977 [2024-07-25 10:43:51.211342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.977 [2024-07-25 10:43:51.211508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.977 [2024-07-25 10:43:51.211518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.977 [2024-07-25 10:43:51.211527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.977 [2024-07-25 10:43:51.214069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.977 [2024-07-25 10:43:51.223290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.977 [2024-07-25 10:43:51.223813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.977 [2024-07-25 10:43:51.223864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.977 [2024-07-25 10:43:51.223896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.224372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.224529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.224538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.224547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.227093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.978 [2024-07-25 10:43:51.236034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.236473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.236490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.236499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.236658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.236842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.236853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.236861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.239393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.978 [2024-07-25 10:43:51.248772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.249214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.249231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.249239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.249395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.249552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.249561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.249569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.252117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
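The same short cycle (resetting controller, connect() failed with errno 111, controller in error state, reinitialization failed, Resetting controller failed) repeats for the remainder of this span, differing only in timestamps. When triaging output like this it is often more useful to count the cycles than to read them line by line; the following is a hypothetical helper, not part of the test suite, that tallies the key messages in a saved console log:

import re
import sys
from collections import Counter

# Hypothetical triage helper: count how often the reconnect loop's key
# messages appear in a saved console log passed as the first argument.
PATTERNS = {
    "reset attempts": re.compile(r"resetting controller"),
    "connect refused (errno 111)": re.compile(r"connect\(\) failed, errno = 111"),
    "reset failures": re.compile(r"Resetting controller failed\."),
}

def summarize(path: str) -> Counter:
    counts = Counter()
    with open(path, "r", errors="replace") as log:
        for line in log:
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    counts[label] += 1
    return counts

if __name__ == "__main__":
    for label, count in summarize(sys.argv[1]).items():
        print(f"{label}: {count}")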
00:28:47.978 [2024-07-25 10:43:51.261670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.262209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.262261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.262293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.262734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.262905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.262915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.262924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.265587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.978 [2024-07-25 10:43:51.274637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.275200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.275218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.275228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.275397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.275566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.275577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.275588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.278140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.978 [2024-07-25 10:43:51.287380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.287927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.287943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.287953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.288109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.288266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.288275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.288283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.290809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.978 [2024-07-25 10:43:51.300043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.300429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.300488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.300521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.301130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.301553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.301563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.301572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.305106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.978 [2024-07-25 10:43:51.313405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.313915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.313933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.313942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.314108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.314274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.314284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.314292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.316833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.978 [2024-07-25 10:43:51.326130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.326587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.326656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.326689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.327292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.327738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.327748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.327757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.330294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.978 [2024-07-25 10:43:51.338810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.339347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.339365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.978 [2024-07-25 10:43:51.339374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.978 [2024-07-25 10:43:51.339539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.978 [2024-07-25 10:43:51.339703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.978 [2024-07-25 10:43:51.339726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.978 [2024-07-25 10:43:51.339736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.978 [2024-07-25 10:43:51.342260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.978 [2024-07-25 10:43:51.351545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.978 [2024-07-25 10:43:51.352029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.978 [2024-07-25 10:43:51.352047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.352056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.352221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.352387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.352396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.352405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.354946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.979 [2024-07-25 10:43:51.364227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.364575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.364592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.364601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.364781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.364948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.364958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.364967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.367508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.979 [2024-07-25 10:43:51.376907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.377419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.377437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.377446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.377603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.377783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.377794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.377802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.380336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.979 [2024-07-25 10:43:51.389559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.390018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.390038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.390048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.390215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.390380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.390390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.390399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.392938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.979 [2024-07-25 10:43:51.402219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.402571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.402588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.402597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.402775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.402941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.402951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.402959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.405496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.979 [2024-07-25 10:43:51.414870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.415381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.415398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.415407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.415564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.415726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.415736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.415761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.418295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.979 [2024-07-25 10:43:51.427573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.427877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.427894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.427903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.428067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.428232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.428242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.428251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.430791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.979 [2024-07-25 10:43:51.440228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.440659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.440675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.440684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.440868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.441033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.441043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.441052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.443586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.979 [2024-07-25 10:43:51.452951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.453462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.453479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.453490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.453647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.453829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.453839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.453848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.456381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.979 [2024-07-25 10:43:51.465604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.466148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.466199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.466231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.979 [2024-07-25 10:43:51.466735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.979 [2024-07-25 10:43:51.466901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.979 [2024-07-25 10:43:51.466911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.979 [2024-07-25 10:43:51.466919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.979 [2024-07-25 10:43:51.469451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.979 [2024-07-25 10:43:51.478332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.979 [2024-07-25 10:43:51.478765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.979 [2024-07-25 10:43:51.478782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.979 [2024-07-25 10:43:51.478791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.478947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.479104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.479114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.479122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.481635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.980 [2024-07-25 10:43:51.491018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.491525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.491541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.491550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.491706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.491890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.491904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.491912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.494446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.980 [2024-07-25 10:43:51.503781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.504330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.504347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.504356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.504512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.504668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.504677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.504685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.507367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.980 [2024-07-25 10:43:51.516499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.517016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.517033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.517042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.517207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.517372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.517382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.517390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.519929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.980 [2024-07-25 10:43:51.529164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.529623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.529640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.529649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.529821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.529986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.529996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.530005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.532531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.980 [2024-07-25 10:43:51.541916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.542439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.542489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.542521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.542925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.543083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.543093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.543101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.545557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.980 [2024-07-25 10:43:51.554630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.555085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.555102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.555111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.555276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.555440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.555450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.555459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.557998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.980 [2024-07-25 10:43:51.567283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.567776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.567829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.567861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.568239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.568396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.568406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.568414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.570946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.980 [2024-07-25 10:43:51.580026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.580530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.580546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.580555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.980 [2024-07-25 10:43:51.580721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.980 [2024-07-25 10:43:51.580900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.980 [2024-07-25 10:43:51.580910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.980 [2024-07-25 10:43:51.580919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.980 [2024-07-25 10:43:51.583474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.980 [2024-07-25 10:43:51.592788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.980 [2024-07-25 10:43:51.593302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.980 [2024-07-25 10:43:51.593354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.980 [2024-07-25 10:43:51.593386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.981 [2024-07-25 10:43:51.593994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.981 [2024-07-25 10:43:51.594396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.981 [2024-07-25 10:43:51.594406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.981 [2024-07-25 10:43:51.594415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.981 [2024-07-25 10:43:51.596951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.981 [2024-07-25 10:43:51.605447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.981 [2024-07-25 10:43:51.605976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.981 [2024-07-25 10:43:51.605993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.981 [2024-07-25 10:43:51.606002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.981 [2024-07-25 10:43:51.606157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.981 [2024-07-25 10:43:51.606313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.981 [2024-07-25 10:43:51.606322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.981 [2024-07-25 10:43:51.606330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.981 [2024-07-25 10:43:51.608861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.981 [2024-07-25 10:43:51.618093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.981 [2024-07-25 10:43:51.618619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.981 [2024-07-25 10:43:51.618637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.981 [2024-07-25 10:43:51.618646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.981 [2024-07-25 10:43:51.618818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.981 [2024-07-25 10:43:51.618983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.981 [2024-07-25 10:43:51.618993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.981 [2024-07-25 10:43:51.619007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.981 [2024-07-25 10:43:51.621542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.981 [2024-07-25 10:43:51.630778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.981 [2024-07-25 10:43:51.631282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.981 [2024-07-25 10:43:51.631299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.981 [2024-07-25 10:43:51.631308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.981 [2024-07-25 10:43:51.631464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.981 [2024-07-25 10:43:51.631620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.981 [2024-07-25 10:43:51.631629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.981 [2024-07-25 10:43:51.631638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.981 [2024-07-25 10:43:51.634187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.981 [2024-07-25 10:43:51.643421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.981 [2024-07-25 10:43:51.643931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.981 [2024-07-25 10:43:51.643947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.981 [2024-07-25 10:43:51.643956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.981 [2024-07-25 10:43:51.644112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.981 [2024-07-25 10:43:51.644269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.981 [2024-07-25 10:43:51.644279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.981 [2024-07-25 10:43:51.644287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.981 [2024-07-25 10:43:51.646819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.981 [2024-07-25 10:43:51.656104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.981 [2024-07-25 10:43:51.656576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.981 [2024-07-25 10:43:51.656626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.981 [2024-07-25 10:43:51.656658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.981 [2024-07-25 10:43:51.657264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.981 [2024-07-25 10:43:51.657566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.981 [2024-07-25 10:43:51.657576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.981 [2024-07-25 10:43:51.657584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.981 [2024-07-25 10:43:51.660121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.981 [2024-07-25 10:43:51.668879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.981 [2024-07-25 10:43:51.669383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.981 [2024-07-25 10:43:51.669400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:47.981 [2024-07-25 10:43:51.669409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:47.981 [2024-07-25 10:43:51.669574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:47.981 [2024-07-25 10:43:51.669746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.981 [2024-07-25 10:43:51.669757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.981 [2024-07-25 10:43:51.669766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.242 [2024-07-25 10:43:51.672388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.242 [2024-07-25 10:43:51.681633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.242 [2024-07-25 10:43:51.682175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.682227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.682258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.682688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.682859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.682869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.682878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.685412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.243 [2024-07-25 10:43:51.694314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.694810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.694827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.694835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.694992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.695147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.695157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.695165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.697691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.243 [2024-07-25 10:43:51.707009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.707516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.707556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.707588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.708195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.708459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.708470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.708478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.711014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.243 [2024-07-25 10:43:51.719660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.720172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.720189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.720198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.720353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.720509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.720519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.720527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.723071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.243 [2024-07-25 10:43:51.732352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.732860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.732877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.732886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.733042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.733199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.733208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.733216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.735744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.243 [2024-07-25 10:43:51.745125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.745643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.745695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.745743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.746317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.746483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.746493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.746501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.750227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.243 [2024-07-25 10:43:51.758328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.758782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.758799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.758808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.758966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.759122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.759131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.759140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.761809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.243 [2024-07-25 10:43:51.771069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.771602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.771619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.771628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.771800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.771965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.771975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.771984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.774511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.243 [2024-07-25 10:43:51.783744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.784288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.784339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.784372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.784840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.785005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.785015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.785024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.787556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.243 [2024-07-25 10:43:51.796492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.797014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.797075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.243 [2024-07-25 10:43:51.797107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.243 [2024-07-25 10:43:51.797577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.243 [2024-07-25 10:43:51.797755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.243 [2024-07-25 10:43:51.797772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.243 [2024-07-25 10:43:51.797781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.243 [2024-07-25 10:43:51.800314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.243 [2024-07-25 10:43:51.809253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.243 [2024-07-25 10:43:51.809762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.243 [2024-07-25 10:43:51.809802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.809834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.810422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.811006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.811016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.811025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.813557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.244 [2024-07-25 10:43:51.821916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.822337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.822354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.822363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.822519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.822675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.822685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.822693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.825246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.244 [2024-07-25 10:43:51.834646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.835153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.835206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.835238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.835704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.835879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.835890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.835898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.838430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.244 [2024-07-25 10:43:51.847424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.847944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.847995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.848027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.848598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.848778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.848793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.848802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.851334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.244 [2024-07-25 10:43:51.860127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.860607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.860652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.860685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.861287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.861890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.861924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.861957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.864487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.244 [2024-07-25 10:43:51.872853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.873370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.873422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.873453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.873963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.874129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.874139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.874148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.876681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.244 [2024-07-25 10:43:51.885558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.886108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.886161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.886192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.886797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.887214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.887224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.887233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.889767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.244 [2024-07-25 10:43:51.898303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.898750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.898767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.898776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.898934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.899091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.899100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.899109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.901628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.244 [2024-07-25 10:43:51.910998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.911504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.911521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.911531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.911696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.911867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.911878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.911886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.914411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.244 [2024-07-25 10:43:51.923646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.924171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.924223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.924263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.244 [2024-07-25 10:43:51.924818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.244 [2024-07-25 10:43:51.924984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.244 [2024-07-25 10:43:51.924994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.244 [2024-07-25 10:43:51.925003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.244 [2024-07-25 10:43:51.927531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.244 [2024-07-25 10:43:51.936325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.244 [2024-07-25 10:43:51.936866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.244 [2024-07-25 10:43:51.936917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.244 [2024-07-25 10:43:51.936949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.245 [2024-07-25 10:43:51.937332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.245 [2024-07-25 10:43:51.937490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.245 [2024-07-25 10:43:51.937500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.245 [2024-07-25 10:43:51.937509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.245 [2024-07-25 10:43:51.940084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.505 [2024-07-25 10:43:51.949026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.505 [2024-07-25 10:43:51.949474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.505 [2024-07-25 10:43:51.949491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.505 [2024-07-25 10:43:51.949501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.505 [2024-07-25 10:43:51.949666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.505 [2024-07-25 10:43:51.949837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.505 [2024-07-25 10:43:51.949848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.505 [2024-07-25 10:43:51.949857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.505 [2024-07-25 10:43:51.952456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.505 [2024-07-25 10:43:51.961729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.505 [2024-07-25 10:43:51.962219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.505 [2024-07-25 10:43:51.962237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.505 [2024-07-25 10:43:51.962247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.505 [2024-07-25 10:43:51.962411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.505 [2024-07-25 10:43:51.962578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.505 [2024-07-25 10:43:51.962591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.505 [2024-07-25 10:43:51.962600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.505 [2024-07-25 10:43:51.965202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.505 [2024-07-25 10:43:51.974555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.505 [2024-07-25 10:43:51.975017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.505 [2024-07-25 10:43:51.975035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.505 [2024-07-25 10:43:51.975045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.505 [2024-07-25 10:43:51.975223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.505 [2024-07-25 10:43:51.975387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:51.975397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:51.975406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:51.978007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.506 [2024-07-25 10:43:51.987355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:51.987851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:51.987869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:51.987879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:51.988044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:51.988209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:51.988221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:51.988231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:51.990771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.506 [2024-07-25 10:43:52.000105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.000568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.000620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.000652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.001100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.001267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.001277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.001286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.003889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.506 [2024-07-25 10:43:52.012936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.013451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.013468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.013477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.013646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.013823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.013834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.013843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.016516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.506 [2024-07-25 10:43:52.025774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.026296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.026314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.026323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.026487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.026651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.026661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.026670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.029268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.506 [2024-07-25 10:43:52.038576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.039110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.039164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.039197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.039558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.039730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.039741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.039749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.042291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.506 [2024-07-25 10:43:52.051244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.051784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.051838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.051870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.052466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.052664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.052674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.052682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.055221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.506 [2024-07-25 10:43:52.064031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.064535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.064587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.064619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.065221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.065535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.065546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.065555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.068099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.506 [2024-07-25 10:43:52.076758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.077244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.077296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.077327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.077932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.078376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.078386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.078394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.080932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.506 [2024-07-25 10:43:52.089438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.089907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.089925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.089934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.090100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.090264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.506 [2024-07-25 10:43:52.090275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.506 [2024-07-25 10:43:52.090287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.506 [2024-07-25 10:43:52.092823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.506 [2024-07-25 10:43:52.102207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.506 [2024-07-25 10:43:52.102720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.506 [2024-07-25 10:43:52.102737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.506 [2024-07-25 10:43:52.102763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.506 [2024-07-25 10:43:52.102927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.506 [2024-07-25 10:43:52.103092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.103102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.103110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.105642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.507 [2024-07-25 10:43:52.114876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.115410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.115461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.115493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.115811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.115977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.115987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.115996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.118526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.507 [2024-07-25 10:43:52.127617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.128090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.128142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.128174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.128585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.128758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.128769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.128778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.131303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.507 [2024-07-25 10:43:52.140394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.140890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.140941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.140974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.141559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.141738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.141749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.141757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.144293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.507 [2024-07-25 10:43:52.153103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.153621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.153638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.153648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.153819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.153985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.153995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.154003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.156540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.507 [2024-07-25 10:43:52.165778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.166250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.166301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.166334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.166836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.167002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.167013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.167021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.169564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.507 [2024-07-25 10:43:52.178520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.178981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.179034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.179066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.179494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.179662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.179672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.179681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.182221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.507 [2024-07-25 10:43:52.191308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.191815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.191833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.191842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.192008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.192173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.192183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.192192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.507 [2024-07-25 10:43:52.194732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.507 [2024-07-25 10:43:52.204088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.507 [2024-07-25 10:43:52.204536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.507 [2024-07-25 10:43:52.204587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.507 [2024-07-25 10:43:52.204619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.507 [2024-07-25 10:43:52.205048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.507 [2024-07-25 10:43:52.205214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.507 [2024-07-25 10:43:52.205224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.507 [2024-07-25 10:43:52.205233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.207853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.768 [2024-07-25 10:43:52.216908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.768 [2024-07-25 10:43:52.217348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.768 [2024-07-25 10:43:52.217366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.768 [2024-07-25 10:43:52.217375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.768 [2024-07-25 10:43:52.217542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.768 [2024-07-25 10:43:52.217707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.768 [2024-07-25 10:43:52.217722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.768 [2024-07-25 10:43:52.217732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.220269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.768 [2024-07-25 10:43:52.229657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.768 [2024-07-25 10:43:52.230167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.768 [2024-07-25 10:43:52.230185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.768 [2024-07-25 10:43:52.230194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.768 [2024-07-25 10:43:52.230360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.768 [2024-07-25 10:43:52.230525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.768 [2024-07-25 10:43:52.230535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.768 [2024-07-25 10:43:52.230544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.233083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.768 [2024-07-25 10:43:52.242333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.768 [2024-07-25 10:43:52.242865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.768 [2024-07-25 10:43:52.242883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.768 [2024-07-25 10:43:52.242892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.768 [2024-07-25 10:43:52.243058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.768 [2024-07-25 10:43:52.243223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.768 [2024-07-25 10:43:52.243233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.768 [2024-07-25 10:43:52.243242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.245779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.768 [2024-07-25 10:43:52.255085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.768 [2024-07-25 10:43:52.255638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.768 [2024-07-25 10:43:52.255690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.768 [2024-07-25 10:43:52.255735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.768 [2024-07-25 10:43:52.256323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.768 [2024-07-25 10:43:52.256847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.768 [2024-07-25 10:43:52.256857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.768 [2024-07-25 10:43:52.256866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.259433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.768 [2024-07-25 10:43:52.267784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.768 [2024-07-25 10:43:52.268245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.768 [2024-07-25 10:43:52.268262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.768 [2024-07-25 10:43:52.268278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.768 [2024-07-25 10:43:52.268450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.768 [2024-07-25 10:43:52.268615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.768 [2024-07-25 10:43:52.268625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.768 [2024-07-25 10:43:52.268633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.271303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.768 [2024-07-25 10:43:52.280628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.768 [2024-07-25 10:43:52.281070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.768 [2024-07-25 10:43:52.281088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.768 [2024-07-25 10:43:52.281098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.768 [2024-07-25 10:43:52.281263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.768 [2024-07-25 10:43:52.281428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.768 [2024-07-25 10:43:52.281439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.768 [2024-07-25 10:43:52.281449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.284051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.768 [2024-07-25 10:43:52.293553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.768 [2024-07-25 10:43:52.294062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.768 [2024-07-25 10:43:52.294115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.768 [2024-07-25 10:43:52.294147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.768 [2024-07-25 10:43:52.294738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.768 [2024-07-25 10:43:52.294904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.768 [2024-07-25 10:43:52.294915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.768 [2024-07-25 10:43:52.294923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.768 [2024-07-25 10:43:52.297490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.768 [2024-07-25 10:43:52.306277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.306786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.306803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.306813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.306978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.307146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.307156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.307164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.309698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.769 [2024-07-25 10:43:52.318948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.319390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.319408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.319417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.319583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.319753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.319764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.319773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.322302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.769 [2024-07-25 10:43:52.331730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.332119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.332138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.332147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.332312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.332476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.332486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.332495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.335036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.769 [2024-07-25 10:43:52.344426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.344954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.345007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.345039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.345405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.345571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.345581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.345590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.348134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.769 [2024-07-25 10:43:52.357088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.357612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.357630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.357639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.357810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.357976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.357986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.357994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.360527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.769 [2024-07-25 10:43:52.369767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.370242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.370295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.370327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.370930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.371242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.371252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.371261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.373800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.769 [2024-07-25 10:43:52.382442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.382971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.382989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.382998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.383155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.383311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.383320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.383328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.385854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.769 [2024-07-25 10:43:52.395088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.395614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.395666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.395705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.396172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.396337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.396347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.396356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.398886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.769 [2024-07-25 10:43:52.407871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.408333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.408350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.408359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.408524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.408689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.408699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.408708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.411248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.769 [2024-07-25 10:43:52.420641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.421175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.769 [2024-07-25 10:43:52.421228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.769 [2024-07-25 10:43:52.421259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.769 [2024-07-25 10:43:52.421862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.769 [2024-07-25 10:43:52.422337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.769 [2024-07-25 10:43:52.422348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.769 [2024-07-25 10:43:52.422357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.769 [2024-07-25 10:43:52.424853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.769 [2024-07-25 10:43:52.433356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.769 [2024-07-25 10:43:52.433878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.770 [2024-07-25 10:43:52.433930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.770 [2024-07-25 10:43:52.433963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.770 [2024-07-25 10:43:52.434146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.770 [2024-07-25 10:43:52.434311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.770 [2024-07-25 10:43:52.434324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.770 [2024-07-25 10:43:52.434333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.770 [2024-07-25 10:43:52.436873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.770 [2024-07-25 10:43:52.446124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.770 [2024-07-25 10:43:52.446670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.770 [2024-07-25 10:43:52.446688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.770 [2024-07-25 10:43:52.446697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.770 [2024-07-25 10:43:52.446866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.770 [2024-07-25 10:43:52.447032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.770 [2024-07-25 10:43:52.447042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.770 [2024-07-25 10:43:52.447050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.770 [2024-07-25 10:43:52.449589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.770 [2024-07-25 10:43:52.458833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.770 [2024-07-25 10:43:52.459340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.770 [2024-07-25 10:43:52.459358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:48.770 [2024-07-25 10:43:52.459366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:48.770 [2024-07-25 10:43:52.459532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:48.770 [2024-07-25 10:43:52.459697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.770 [2024-07-25 10:43:52.459706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.770 [2024-07-25 10:43:52.459721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.770 [2024-07-25 10:43:52.462255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.030 [2024-07-25 10:43:52.471760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.030 [2024-07-25 10:43:52.472234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.030 [2024-07-25 10:43:52.472285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.030 [2024-07-25 10:43:52.472318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.030 [2024-07-25 10:43:52.472924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.030 [2024-07-25 10:43:52.473527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.030 [2024-07-25 10:43:52.473541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.030 [2024-07-25 10:43:52.473553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.030 [2024-07-25 10:43:52.477299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.030 [2024-07-25 10:43:52.485124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.030 [2024-07-25 10:43:52.485663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.030 [2024-07-25 10:43:52.485731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.030 [2024-07-25 10:43:52.485764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.030 [2024-07-25 10:43:52.486352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.030 [2024-07-25 10:43:52.486683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.030 [2024-07-25 10:43:52.486693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.030 [2024-07-25 10:43:52.486702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.030 [2024-07-25 10:43:52.489242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.030 [2024-07-25 10:43:52.497907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.030 [2024-07-25 10:43:52.498424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.030 [2024-07-25 10:43:52.498474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.030 [2024-07-25 10:43:52.498506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.499111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.499628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.499639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.499647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.502226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.031 [2024-07-25 10:43:52.510574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.511071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.511124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.511156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.511620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.511793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.511803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.511812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.514342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.031 [2024-07-25 10:43:52.523293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.523828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.523846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.523855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.524023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.524188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.524198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.524206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.526879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.031 [2024-07-25 10:43:52.536182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.536699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.536764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.536797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.537386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.537762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.537773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.537782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.540375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.031 [2024-07-25 10:43:52.549126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.549557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.549573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.549583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.549755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.549919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.549930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.549938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.552539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.031 [2024-07-25 10:43:52.561981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.562444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.562462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.562471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.562635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.562806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.562816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.562828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.565360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.031 [2024-07-25 10:43:52.574741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.575265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.575316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.575348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.575821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.575987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.575997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.576006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.578538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.031 [2024-07-25 10:43:52.587477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.587961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.587978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.587986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.588142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.588298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.588308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.588316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.590849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.031 [2024-07-25 10:43:52.600218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.600702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.600778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.600810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.601402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.601900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.601910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.601919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.604451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.031 [2024-07-25 10:43:52.612947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.613453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.613474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.613483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.613648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.031 [2024-07-25 10:43:52.613820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.031 [2024-07-25 10:43:52.613831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.031 [2024-07-25 10:43:52.613839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.031 [2024-07-25 10:43:52.616368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.031 [2024-07-25 10:43:52.625623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.031 [2024-07-25 10:43:52.626070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.031 [2024-07-25 10:43:52.626088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.031 [2024-07-25 10:43:52.626098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.031 [2024-07-25 10:43:52.626263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.626428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.626439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.626447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.628984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.032 [2024-07-25 10:43:52.638354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.638805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.638858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.638890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.639479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.639699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.639709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.639723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.642256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.032 [2024-07-25 10:43:52.651056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.651559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.651576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.651586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.651758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.651927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.651937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.651946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.654478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.032 [2024-07-25 10:43:52.663786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.664308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.664359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.664391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.664909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.665076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.665086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.665095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.667624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.032 [2024-07-25 10:43:52.676547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.677093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.677145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.677178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.677739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.677905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.677915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.677924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.680455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.032 [2024-07-25 10:43:52.689250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.689751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.689802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.689834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.690423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.690927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.690938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.690947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.693485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.032 [2024-07-25 10:43:52.701981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.702491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.702508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.702517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.702682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.702854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.702864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.702873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.705409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.032 [2024-07-25 10:43:52.714640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.715149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.715166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.715176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.715340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.715505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.715515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.715524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.718061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.032 [2024-07-25 10:43:52.727385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.032 [2024-07-25 10:43:52.727830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.032 [2024-07-25 10:43:52.727848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.032 [2024-07-25 10:43:52.727857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.032 [2024-07-25 10:43:52.728027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.032 [2024-07-25 10:43:52.728197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.032 [2024-07-25 10:43:52.728208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.032 [2024-07-25 10:43:52.728216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.032 [2024-07-25 10:43:52.730861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.293 [2024-07-25 10:43:52.740191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.293 [2024-07-25 10:43:52.740708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.293 [2024-07-25 10:43:52.740772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.293 [2024-07-25 10:43:52.740811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.293 [2024-07-25 10:43:52.741271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.293 [2024-07-25 10:43:52.741437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.293 [2024-07-25 10:43:52.741447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.293 [2024-07-25 10:43:52.741456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.293 [2024-07-25 10:43:52.743999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.293 [2024-07-25 10:43:52.752921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.293 [2024-07-25 10:43:52.753433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.293 [2024-07-25 10:43:52.753450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.293 [2024-07-25 10:43:52.753459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.293 [2024-07-25 10:43:52.753625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.293 [2024-07-25 10:43:52.753796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.293 [2024-07-25 10:43:52.753807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.293 [2024-07-25 10:43:52.753815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.293 [2024-07-25 10:43:52.756348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.293 [2024-07-25 10:43:52.765570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.293 [2024-07-25 10:43:52.766095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.293 [2024-07-25 10:43:52.766147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.293 [2024-07-25 10:43:52.766179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.293 [2024-07-25 10:43:52.766627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.293 [2024-07-25 10:43:52.766871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.293 [2024-07-25 10:43:52.766886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.293 [2024-07-25 10:43:52.766898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.293 [2024-07-25 10:43:52.770643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.293 [2024-07-25 10:43:52.778838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.293 [2024-07-25 10:43:52.779350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.293 [2024-07-25 10:43:52.779402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.293 [2024-07-25 10:43:52.779433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.293 [2024-07-25 10:43:52.780035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.293 [2024-07-25 10:43:52.780264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.293 [2024-07-25 10:43:52.780277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.293 [2024-07-25 10:43:52.780287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.293 [2024-07-25 10:43:52.782958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.293 [2024-07-25 10:43:52.791664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.293 [2024-07-25 10:43:52.792154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.293 [2024-07-25 10:43:52.792207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.293 [2024-07-25 10:43:52.792239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.792836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.793065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.793075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.793084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.795721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.294 [2024-07-25 10:43:52.804503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.804990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.805007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.805016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.805171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.805327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.805337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.805345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.807875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.294 [2024-07-25 10:43:52.817280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.817777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.817828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.817859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.818293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.818458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.818468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.818477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.821016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.294 [2024-07-25 10:43:52.829970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.830479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.830497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.830506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.830670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.830844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.830854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.830863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.833391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.294 [2024-07-25 10:43:52.842606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.843114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.843167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.843198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.843550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.843722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.843732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.843741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.846268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.294 [2024-07-25 10:43:52.855386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.855902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.855922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.855932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.856097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.856261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.856271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.856280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.858818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.294 [2024-07-25 10:43:52.868088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.868584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.868636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.868668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.869280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.869587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.869597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.869606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.872149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.294 [2024-07-25 10:43:52.880796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.881304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.881321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.881330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.881496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.881661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.881671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.881679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.884213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.294 [2024-07-25 10:43:52.893432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.893868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.893886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.893895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.894060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.894225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.894235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.894243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.896780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.294 [2024-07-25 10:43:52.906150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.906666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.906730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.906763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.907252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.294 [2024-07-25 10:43:52.907417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.294 [2024-07-25 10:43:52.907427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.294 [2024-07-25 10:43:52.907443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.294 [2024-07-25 10:43:52.909980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.294 [2024-07-25 10:43:52.918913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.294 [2024-07-25 10:43:52.919433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.294 [2024-07-25 10:43:52.919484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.294 [2024-07-25 10:43:52.919515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.294 [2024-07-25 10:43:52.920118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.295 [2024-07-25 10:43:52.920518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.295 [2024-07-25 10:43:52.920528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.295 [2024-07-25 10:43:52.920537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.295 [2024-07-25 10:43:52.923073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.295 [2024-07-25 10:43:52.931569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.295 [2024-07-25 10:43:52.932090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.295 [2024-07-25 10:43:52.932141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.295 [2024-07-25 10:43:52.932173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.295 [2024-07-25 10:43:52.932674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.295 [2024-07-25 10:43:52.932845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.295 [2024-07-25 10:43:52.932856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.295 [2024-07-25 10:43:52.932864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.295 [2024-07-25 10:43:52.935398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.295 [2024-07-25 10:43:52.944339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.295 [2024-07-25 10:43:52.944847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.295 [2024-07-25 10:43:52.944899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.295 [2024-07-25 10:43:52.944931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.295 [2024-07-25 10:43:52.945324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.295 [2024-07-25 10:43:52.945480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.295 [2024-07-25 10:43:52.945490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.295 [2024-07-25 10:43:52.945498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.295 [2024-07-25 10:43:52.948040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.295 [2024-07-25 10:43:52.957120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.295 [2024-07-25 10:43:52.957614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.295 [2024-07-25 10:43:52.957664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.295 [2024-07-25 10:43:52.957696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.295 [2024-07-25 10:43:52.958301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.295 [2024-07-25 10:43:52.958735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.295 [2024-07-25 10:43:52.958746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.295 [2024-07-25 10:43:52.958755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.295 [2024-07-25 10:43:52.961285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.295 [2024-07-25 10:43:52.969795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.295 [2024-07-25 10:43:52.970333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.295 [2024-07-25 10:43:52.970383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.295 [2024-07-25 10:43:52.970415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.295 [2024-07-25 10:43:52.970843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.295 [2024-07-25 10:43:52.971015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.295 [2024-07-25 10:43:52.971026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.295 [2024-07-25 10:43:52.971034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.295 [2024-07-25 10:43:52.973567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.295 [2024-07-25 10:43:52.982564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.295 [2024-07-25 10:43:52.983088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.295 [2024-07-25 10:43:52.983139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.295 [2024-07-25 10:43:52.983172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.295 [2024-07-25 10:43:52.983682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.295 [2024-07-25 10:43:52.983853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.295 [2024-07-25 10:43:52.983864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.295 [2024-07-25 10:43:52.983873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.295 [2024-07-25 10:43:52.986406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.295 [2024-07-25 10:43:52.995365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:52.995856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:52.995909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:52.995941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:52.996499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:52.996664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:52.996674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:52.996683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:52.999224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.556 [2024-07-25 10:43:53.008219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.008762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.008814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.008846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.009436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.009920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.009936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.009948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:53.013677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.556 [2024-07-25 10:43:53.021328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.021775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.021828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.021861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.022451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.022809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.022821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.022829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:53.025367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.556 [2024-07-25 10:43:53.034075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.034611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.034663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.034695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.035299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.035905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.035949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.035961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:53.038649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.556 [2024-07-25 10:43:53.047044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.047525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.047544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.047554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.047725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.047891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.047903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.047912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:53.050510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.556 [2024-07-25 10:43:53.059981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.060498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.060517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.060526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.060692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.060865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.060877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.060885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:53.063472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.556 [2024-07-25 10:43:53.072718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.073166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.073185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.073194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.073352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.073509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.073520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.073528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:53.075995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.556 [2024-07-25 10:43:53.085434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.085934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.085996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.086029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.086602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.086764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.086775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.086785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.556 [2024-07-25 10:43:53.089246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.556 [2024-07-25 10:43:53.098101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.556 [2024-07-25 10:43:53.098604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.556 [2024-07-25 10:43:53.098656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.556 [2024-07-25 10:43:53.098689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.556 [2024-07-25 10:43:53.099160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.556 [2024-07-25 10:43:53.099318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.556 [2024-07-25 10:43:53.099329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.556 [2024-07-25 10:43:53.099337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.101793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.557 [2024-07-25 10:43:53.110792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.111256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.111307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.111340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.111874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.112033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.112043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.112053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.114509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.557 [2024-07-25 10:43:53.123504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.124027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.124045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.124054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.124211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.124371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.124382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.124390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.126857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.557 [2024-07-25 10:43:53.136283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.136594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.136612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.136622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.136785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.136944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.136954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.136963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.139423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.557 [2024-07-25 10:43:53.149014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.149515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.149566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.149598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.150093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.150257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.150268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.150277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.152742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.557 [2024-07-25 10:43:53.161741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.162028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.162046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.162055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.162212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.162370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.162380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.162388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.164857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.557 [2024-07-25 10:43:53.174439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.174943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.174961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.174970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.175128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.175284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.175294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.175303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.177761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.557 [2024-07-25 10:43:53.187190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.187627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.187679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.187711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.188210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.188368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.188379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.188387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.190850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.557 [2024-07-25 10:43:53.199845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.200363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.200414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.200447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.200949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.201109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.201120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.201128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.203585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.557 [2024-07-25 10:43:53.212581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.213102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.213153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.213193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.213796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.214256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.557 [2024-07-25 10:43:53.214266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.557 [2024-07-25 10:43:53.214275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.557 [2024-07-25 10:43:53.216737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.557 [2024-07-25 10:43:53.225295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.557 [2024-07-25 10:43:53.225742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.557 [2024-07-25 10:43:53.225795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.557 [2024-07-25 10:43:53.225828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.557 [2024-07-25 10:43:53.226249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.557 [2024-07-25 10:43:53.226407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.558 [2024-07-25 10:43:53.226417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.558 [2024-07-25 10:43:53.226427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.558 [2024-07-25 10:43:53.228893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.558 [2024-07-25 10:43:53.238026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.558 [2024-07-25 10:43:53.238517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.558 [2024-07-25 10:43:53.238568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.558 [2024-07-25 10:43:53.238601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.558 [2024-07-25 10:43:53.239098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.558 [2024-07-25 10:43:53.239258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.558 [2024-07-25 10:43:53.239269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.558 [2024-07-25 10:43:53.239279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.558 [2024-07-25 10:43:53.241742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.558 [2024-07-25 10:43:53.250851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.558 [2024-07-25 10:43:53.251376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.558 [2024-07-25 10:43:53.251429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.558 [2024-07-25 10:43:53.251462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.558 [2024-07-25 10:43:53.252071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.558 [2024-07-25 10:43:53.252569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.558 [2024-07-25 10:43:53.252588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.558 [2024-07-25 10:43:53.252601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.558 [2024-07-25 10:43:53.256339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.818 [2024-07-25 10:43:53.264289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.818 [2024-07-25 10:43:53.264798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.818 [2024-07-25 10:43:53.264849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.818 [2024-07-25 10:43:53.264882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.818 [2024-07-25 10:43:53.265471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.818 [2024-07-25 10:43:53.265663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.818 [2024-07-25 10:43:53.265674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.818 [2024-07-25 10:43:53.265683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.818 [2024-07-25 10:43:53.268147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.818 [2024-07-25 10:43:53.277006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.818 [2024-07-25 10:43:53.277507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.818 [2024-07-25 10:43:53.277560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.818 [2024-07-25 10:43:53.277593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.818 [2024-07-25 10:43:53.278197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.818 [2024-07-25 10:43:53.278579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.818 [2024-07-25 10:43:53.278590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.818 [2024-07-25 10:43:53.278598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.818 [2024-07-25 10:43:53.281060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.818 [2024-07-25 10:43:53.289753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.818 [2024-07-25 10:43:53.290184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.818 [2024-07-25 10:43:53.290236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.818 [2024-07-25 10:43:53.290268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.818 [2024-07-25 10:43:53.290876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.818 [2024-07-25 10:43:53.291478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.818 [2024-07-25 10:43:53.291489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.818 [2024-07-25 10:43:53.291499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.818 [2024-07-25 10:43:53.294173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.818 [2024-07-25 10:43:53.302645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.818 [2024-07-25 10:43:53.303142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.818 [2024-07-25 10:43:53.303160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.818 [2024-07-25 10:43:53.303170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.818 [2024-07-25 10:43:53.303335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.818 [2024-07-25 10:43:53.303500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.818 [2024-07-25 10:43:53.303511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.818 [2024-07-25 10:43:53.303520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.818 [2024-07-25 10:43:53.306118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.818 [2024-07-25 10:43:53.315479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.818 [2024-07-25 10:43:53.315995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.818 [2024-07-25 10:43:53.316012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.818 [2024-07-25 10:43:53.316022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.316179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.316336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.316346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.316355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.318946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.819 [2024-07-25 10:43:53.328226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.328679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.328745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.328780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.329257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.329415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.329426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.329436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.331898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.819 [2024-07-25 10:43:53.341204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.341671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.341736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.341770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.342229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.342401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.342412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.342422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.344982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.819 [2024-07-25 10:43:53.353985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.354477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.354529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.354561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.355172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.355586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.355597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.355606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.358068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.819 [2024-07-25 10:43:53.366637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.367105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.367124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.367133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.367290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.367448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.367459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.367468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.369932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.819 [2024-07-25 10:43:53.379365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.379886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.379939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.379971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.380463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.380621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.380631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.380643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.383118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.819 [2024-07-25 10:43:53.392113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.392632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.392684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.392733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.393324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.393662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.393673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.393682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.396219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.819 [2024-07-25 10:43:53.404784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.405163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.405216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.405249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.405708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.405872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.405882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.405891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.408348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.819 [2024-07-25 10:43:53.417481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.417978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.418031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.418063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.418552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.418710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.418728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.418737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.421196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.819 [2024-07-25 10:43:53.430186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.430690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.430753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.430787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.431376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.431899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.819 [2024-07-25 10:43:53.431910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.819 [2024-07-25 10:43:53.431919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.819 [2024-07-25 10:43:53.434378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.819 [2024-07-25 10:43:53.442942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.819 [2024-07-25 10:43:53.443437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.819 [2024-07-25 10:43:53.443488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.819 [2024-07-25 10:43:53.443520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.819 [2024-07-25 10:43:53.444128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.819 [2024-07-25 10:43:53.444699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.820 [2024-07-25 10:43:53.444709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.820 [2024-07-25 10:43:53.444721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.820 [2024-07-25 10:43:53.448078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.820 [2024-07-25 10:43:53.456562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.820 [2024-07-25 10:43:53.457087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.820 [2024-07-25 10:43:53.457141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.820 [2024-07-25 10:43:53.457173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.820 [2024-07-25 10:43:53.457629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.820 [2024-07-25 10:43:53.457793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.820 [2024-07-25 10:43:53.457804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.820 [2024-07-25 10:43:53.457813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.820 [2024-07-25 10:43:53.460272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.820 [2024-07-25 10:43:53.469269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.820 [2024-07-25 10:43:53.469795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.820 [2024-07-25 10:43:53.469849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.820 [2024-07-25 10:43:53.469881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.820 [2024-07-25 10:43:53.470481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.820 [2024-07-25 10:43:53.470826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.820 [2024-07-25 10:43:53.470837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.820 [2024-07-25 10:43:53.470845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.820 [2024-07-25 10:43:53.473391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.820 [2024-07-25 10:43:53.481948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.820 [2024-07-25 10:43:53.482373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.820 [2024-07-25 10:43:53.482391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.820 [2024-07-25 10:43:53.482400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.820 [2024-07-25 10:43:53.482556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.820 [2024-07-25 10:43:53.482719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.820 [2024-07-25 10:43:53.482731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.820 [2024-07-25 10:43:53.482739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.820 [2024-07-25 10:43:53.485195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.820 [2024-07-25 10:43:53.494614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.820 [2024-07-25 10:43:53.495084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.820 [2024-07-25 10:43:53.495137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.820 [2024-07-25 10:43:53.495169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.820 [2024-07-25 10:43:53.495638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.820 [2024-07-25 10:43:53.495803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.820 [2024-07-25 10:43:53.495814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.820 [2024-07-25 10:43:53.495822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.820 [2024-07-25 10:43:53.498282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.820 [2024-07-25 10:43:53.507265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.820 [2024-07-25 10:43:53.507777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.820 [2024-07-25 10:43:53.507829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:49.820 [2024-07-25 10:43:53.507861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:49.820 [2024-07-25 10:43:53.508453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:49.820 [2024-07-25 10:43:53.509047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.820 [2024-07-25 10:43:53.509058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.820 [2024-07-25 10:43:53.509070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.820 [2024-07-25 10:43:53.511608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.820 [2024-07-25 10:43:53.520120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.080 [2024-07-25 10:43:53.520599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.080 [2024-07-25 10:43:53.520618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.080 [2024-07-25 10:43:53.520628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.080 [2024-07-25 10:43:53.520803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.080 [2024-07-25 10:43:53.520970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.080 [2024-07-25 10:43:53.520981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.080 [2024-07-25 10:43:53.520990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.080 [2024-07-25 10:43:53.523564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.080 [2024-07-25 10:43:53.532887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.080 [2024-07-25 10:43:53.533395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.080 [2024-07-25 10:43:53.533413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.080 [2024-07-25 10:43:53.533421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.080 [2024-07-25 10:43:53.533577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.080 [2024-07-25 10:43:53.533740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.080 [2024-07-25 10:43:53.533751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.080 [2024-07-25 10:43:53.533759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.080 [2024-07-25 10:43:53.536216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.080 [2024-07-25 10:43:53.545652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.546161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.546179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.546188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.546344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.546501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.546511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.546520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.549213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.081 [2024-07-25 10:43:53.558533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.559037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.559061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.559071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.559237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.559402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.559413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.559421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.562020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.081 [2024-07-25 10:43:53.571366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.571820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.571839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.571848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.572014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.572179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.572191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.572200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.574801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.081 [2024-07-25 10:43:53.584058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.584548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.584566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.584575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.584737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.584895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.584906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.584914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.587369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.081 [2024-07-25 10:43:53.596890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.597374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.597391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.597400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.597557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.597723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.597735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.597744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.600200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.081 [2024-07-25 10:43:53.609713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.610065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.610083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.610092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.610249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.610406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.610417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.610425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.612887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.081 [2024-07-25 10:43:53.622461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.622876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.622894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.622903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.623059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.623217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.623228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.623237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.625696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.081 [2024-07-25 10:43:53.635365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.635894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.635946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.635979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.636305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.636482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.636495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.636504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.639221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.081 [2024-07-25 10:43:53.648140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.649066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.649090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.649101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.649265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.649423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.649434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.649442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.651909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.081 [2024-07-25 10:43:53.660908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.661362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.081 [2024-07-25 10:43:53.661415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.081 [2024-07-25 10:43:53.661448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.081 [2024-07-25 10:43:53.661982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.081 [2024-07-25 10:43:53.662142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.081 [2024-07-25 10:43:53.662153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.081 [2024-07-25 10:43:53.662161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.081 [2024-07-25 10:43:53.664626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.081 [2024-07-25 10:43:53.673638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.081 [2024-07-25 10:43:53.674158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.674212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.674245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.674609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.674772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.674783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.674792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.677251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.082 [2024-07-25 10:43:53.686400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.686861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.686879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.686892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.687049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.687206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.687217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.687225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.689685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.082 [2024-07-25 10:43:53.699131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.699567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.699619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.699652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.700145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.700304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.700315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.700324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.702788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.082 [2024-07-25 10:43:53.711784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.712204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.712256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.712289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.712898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.713360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.713372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.713380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.715843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.082 [2024-07-25 10:43:53.724548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.725066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.725118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.725150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.725596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.725761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.725776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.725785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.728241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.082 [2024-07-25 10:43:53.737241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.737665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.737730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.737764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.738264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.738422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.738433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.738441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.740902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.082 [2024-07-25 10:43:53.749906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.750351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.750402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.750434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.751040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.751530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.751540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.751549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.754009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.082 [2024-07-25 10:43:53.762579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.763086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.763104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.763113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.763270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.763427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.763438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.763447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.765912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.082 [2024-07-25 10:43:53.775366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.082 [2024-07-25 10:43:53.776820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.082 [2024-07-25 10:43:53.776844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.082 [2024-07-25 10:43:53.776854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.082 [2024-07-25 10:43:53.777018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.082 [2024-07-25 10:43:53.777175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.082 [2024-07-25 10:43:53.777185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.082 [2024-07-25 10:43:53.777194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.082 [2024-07-25 10:43:53.779787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.343 [2024-07-25 10:43:53.788076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.343 [2024-07-25 10:43:53.788459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.343 [2024-07-25 10:43:53.788511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.343 [2024-07-25 10:43:53.788545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.343 [2024-07-25 10:43:53.789155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.343 [2024-07-25 10:43:53.789594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.343 [2024-07-25 10:43:53.789606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.343 [2024-07-25 10:43:53.789615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.343 [2024-07-25 10:43:53.792264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.343 [2024-07-25 10:43:53.800954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.343 [2024-07-25 10:43:53.801453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.343 [2024-07-25 10:43:53.801472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.343 [2024-07-25 10:43:53.801482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.343 [2024-07-25 10:43:53.801651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.343 [2024-07-25 10:43:53.801827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.343 [2024-07-25 10:43:53.801838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.343 [2024-07-25 10:43:53.801848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.343 [2024-07-25 10:43:53.804519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.343 [2024-07-25 10:43:53.813824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.343 [2024-07-25 10:43:53.814341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.343 [2024-07-25 10:43:53.814360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.343 [2024-07-25 10:43:53.814371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.343 [2024-07-25 10:43:53.814544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.343 [2024-07-25 10:43:53.814721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.343 [2024-07-25 10:43:53.814732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.343 [2024-07-25 10:43:53.814741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.343 [2024-07-25 10:43:53.817407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.343 [2024-07-25 10:43:53.826704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.343 [2024-07-25 10:43:53.827207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.343 [2024-07-25 10:43:53.827224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.343 [2024-07-25 10:43:53.827234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.343 [2024-07-25 10:43:53.827404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.343 [2024-07-25 10:43:53.827574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.343 [2024-07-25 10:43:53.827586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.343 [2024-07-25 10:43:53.827595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.343 [2024-07-25 10:43:53.830299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.344 [2024-07-25 10:43:53.839704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.840232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.840251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.840260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.840431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.840601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.840612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.840621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.843318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.344 [2024-07-25 10:43:53.852870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.853378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.853397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.853408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.853588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.853777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.853789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.853802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.856483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.344 [2024-07-25 10:43:53.865787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.866239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.866257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.866267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.866438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.866608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.866620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.866629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.869305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.344 [2024-07-25 10:43:53.878963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.879351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.879369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.879379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.879550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.879726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.879737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.879747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.882417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.344 [2024-07-25 10:43:53.891887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.892314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.892333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.892343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.892513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.892683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.892694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.892703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.895376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.344 [2024-07-25 10:43:53.904839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.905312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.905363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.905396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.905912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.906079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.906090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.906099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.908688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.344 [2024-07-25 10:43:53.917611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.918055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.918109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.918141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.918651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.918815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.918826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.918834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.921287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.344 [2024-07-25 10:43:53.930293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.930706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.930728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.930738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.930894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.931052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.931062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.931071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.933523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.344 [2024-07-25 10:43:53.942977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.943391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.943408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.943417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.943573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.943738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.943750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.943758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.946218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.344 [2024-07-25 10:43:53.955659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.344 [2024-07-25 10:43:53.956086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.344 [2024-07-25 10:43:53.956139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.344 [2024-07-25 10:43:53.956171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.344 [2024-07-25 10:43:53.956773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.344 [2024-07-25 10:43:53.957104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.344 [2024-07-25 10:43:53.957115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.344 [2024-07-25 10:43:53.957124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.344 [2024-07-25 10:43:53.959584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.345 [2024-07-25 10:43:53.968435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.345 [2024-07-25 10:43:53.968865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.345 [2024-07-25 10:43:53.968883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.345 [2024-07-25 10:43:53.968893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.345 [2024-07-25 10:43:53.969050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.345 [2024-07-25 10:43:53.969206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.345 [2024-07-25 10:43:53.969217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.345 [2024-07-25 10:43:53.969225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.345 [2024-07-25 10:43:53.971689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.345 [2024-07-25 10:43:53.981143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.345 [2024-07-25 10:43:53.981527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.345 [2024-07-25 10:43:53.981579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.345 [2024-07-25 10:43:53.981611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.345 [2024-07-25 10:43:53.982088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.345 [2024-07-25 10:43:53.982246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.345 [2024-07-25 10:43:53.982257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.345 [2024-07-25 10:43:53.982266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.345 [2024-07-25 10:43:53.984733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.345 [2024-07-25 10:43:53.993882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.345 [2024-07-25 10:43:53.994295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.345 [2024-07-25 10:43:53.994346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.345 [2024-07-25 10:43:53.994378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.345 [2024-07-25 10:43:53.994985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.345 [2024-07-25 10:43:53.995349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.345 [2024-07-25 10:43:53.995360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.345 [2024-07-25 10:43:53.995369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4048676 Killed "${NVMF_APP[@]}" "$@" 00:28:50.345 10:43:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:50.345 10:43:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:50.345 10:43:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:50.345 [2024-07-25 10:43:53.997977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.345 10:43:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:50.345 10:43:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.345 [2024-07-25 10:43:54.006798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4050210 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4050210 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:50.345 [2024-07-25 10:43:54.007232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.345 [2024-07-25 10:43:54.007250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.345 [2024-07-25 10:43:54.007260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.345 [2024-07-25 10:43:54.007431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 4050210 ']' 00:28:50.345 [2024-07-25 10:43:54.007601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.345 [2024-07-25 10:43:54.007614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.345 [2024-07-25 10:43:54.007623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:50.345 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.345 [2024-07-25 10:43:54.010303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.345 [2024-07-25 10:43:54.019764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.345 [2024-07-25 10:43:54.020246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.345 [2024-07-25 10:43:54.020265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.345 [2024-07-25 10:43:54.020276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.345 [2024-07-25 10:43:54.020445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.345 [2024-07-25 10:43:54.020617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.345 [2024-07-25 10:43:54.020628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.345 [2024-07-25 10:43:54.020637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.345 [2024-07-25 10:43:54.023306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.345 [2024-07-25 10:43:54.032775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.345 [2024-07-25 10:43:54.033159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.345 [2024-07-25 10:43:54.033179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.345 [2024-07-25 10:43:54.033189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.345 [2024-07-25 10:43:54.033359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.345 [2024-07-25 10:43:54.033530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.345 [2024-07-25 10:43:54.033541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.345 [2024-07-25 10:43:54.033551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.345 [2024-07-25 10:43:54.036224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.606 [2024-07-25 10:43:54.045668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.606 [2024-07-25 10:43:54.046059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.606 [2024-07-25 10:43:54.046078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.606 [2024-07-25 10:43:54.046089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.606 [2024-07-25 10:43:54.046260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.606 [2024-07-25 10:43:54.046431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.606 [2024-07-25 10:43:54.046443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.606 [2024-07-25 10:43:54.046452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.606 [2024-07-25 10:43:54.049094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.606 [2024-07-25 10:43:54.057265] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:28:50.606 [2024-07-25 10:43:54.057309] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.606 [2024-07-25 10:43:54.058605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.606 [2024-07-25 10:43:54.059024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.606 [2024-07-25 10:43:54.059043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.606 [2024-07-25 10:43:54.059054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.606 [2024-07-25 10:43:54.059224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.606 [2024-07-25 10:43:54.059395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.606 [2024-07-25 10:43:54.059407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.606 [2024-07-25 10:43:54.059416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.606 [2024-07-25 10:43:54.062092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.606 [2024-07-25 10:43:54.071550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.606 [2024-07-25 10:43:54.071949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.606 [2024-07-25 10:43:54.071967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.606 [2024-07-25 10:43:54.071977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.606 [2024-07-25 10:43:54.072148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.606 [2024-07-25 10:43:54.072318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.606 [2024-07-25 10:43:54.072330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.606 [2024-07-25 10:43:54.072340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.606 [2024-07-25 10:43:54.075019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.606 [2024-07-25 10:43:54.084514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.606 [2024-07-25 10:43:54.084993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.606 [2024-07-25 10:43:54.085012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.606 [2024-07-25 10:43:54.085022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.606 [2024-07-25 10:43:54.085188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.606 [2024-07-25 10:43:54.085354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.606 [2024-07-25 10:43:54.085364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.606 [2024-07-25 10:43:54.085374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.606 [2024-07-25 10:43:54.088100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.606 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.606 [2024-07-25 10:43:54.097527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.606 [2024-07-25 10:43:54.097969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.606 [2024-07-25 10:43:54.097992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.606 [2024-07-25 10:43:54.098002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.606 [2024-07-25 10:43:54.098173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.606 [2024-07-25 10:43:54.098344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.606 [2024-07-25 10:43:54.098355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.606 [2024-07-25 10:43:54.098364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.606 [2024-07-25 10:43:54.101037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.606 [2024-07-25 10:43:54.110496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.606 [2024-07-25 10:43:54.110928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.606 [2024-07-25 10:43:54.110948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.606 [2024-07-25 10:43:54.110958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.111126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.111292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.111303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.111313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.113916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-07-25 10:43:54.123418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.123887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.123906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.123916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.124082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.124248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.124259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.124270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.126863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.607 [2024-07-25 10:43:54.132619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:50.607 [2024-07-25 10:43:54.136222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.136646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.136664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.136674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.136847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.137017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.137028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.137037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.139634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-07-25 10:43:54.149153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.149579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.149598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.149608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.149778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.149944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.149955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.149964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.152559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.607 [2024-07-25 10:43:54.162055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.162572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.162589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.162599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.162772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.162938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.162949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.162958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.165554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-07-25 10:43:54.174923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.175497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.175520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.175531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.175705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.175882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.175895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.175904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.178578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.607 [2024-07-25 10:43:54.187887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.188262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.188282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.188293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.188464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.188635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.188646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.188656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.191335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-07-25 10:43:54.200825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.201264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.201283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.201292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.201458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.201624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.201635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.201644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.204244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-07-25 10:43:54.206636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.607 [2024-07-25 10:43:54.206664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.607 [2024-07-25 10:43:54.206674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.607 [2024-07-25 10:43:54.206682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.607 [2024-07-25 10:43:54.206689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.607 [2024-07-25 10:43:54.206731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.607 [2024-07-25 10:43:54.206777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.607 [2024-07-25 10:43:54.206780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.607 [2024-07-25 10:43:54.213783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-07-25 10:43:54.214282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-07-25 10:43:54.214302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-07-25 10:43:54.214313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.607 [2024-07-25 10:43:54.214484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.607 [2024-07-25 10:43:54.214658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-07-25 10:43:54.214670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-07-25 10:43:54.214680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-07-25 10:43:54.217354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.608 [2024-07-25 10:43:54.226655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-07-25 10:43:54.227204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-07-25 10:43:54.227225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-07-25 10:43:54.227235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.608 [2024-07-25 10:43:54.227402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.608 [2024-07-25 10:43:54.227568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-07-25 10:43:54.227580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-07-25 10:43:54.227589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-07-25 10:43:54.230268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.608 [2024-07-25 10:43:54.239564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-07-25 10:43:54.240138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-07-25 10:43:54.240158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-07-25 10:43:54.240169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.608 [2024-07-25 10:43:54.240340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.608 [2024-07-25 10:43:54.240511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-07-25 10:43:54.240523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-07-25 10:43:54.240532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-07-25 10:43:54.243218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.608 [2024-07-25 10:43:54.252512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-07-25 10:43:54.253038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-07-25 10:43:54.253058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-07-25 10:43:54.253068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.608 [2024-07-25 10:43:54.253240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.608 [2024-07-25 10:43:54.253412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-07-25 10:43:54.253424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-07-25 10:43:54.253433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-07-25 10:43:54.256115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.608 [2024-07-25 10:43:54.265416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-07-25 10:43:54.265955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-07-25 10:43:54.265976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-07-25 10:43:54.265987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.608 [2024-07-25 10:43:54.266159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.608 [2024-07-25 10:43:54.266329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-07-25 10:43:54.266341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-07-25 10:43:54.266351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-07-25 10:43:54.269022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.608 [2024-07-25 10:43:54.278326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-07-25 10:43:54.278887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-07-25 10:43:54.278907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-07-25 10:43:54.278917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.608 [2024-07-25 10:43:54.279089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.608 [2024-07-25 10:43:54.279260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-07-25 10:43:54.279271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-07-25 10:43:54.279280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-07-25 10:43:54.281957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.608 [2024-07-25 10:43:54.291286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-07-25 10:43:54.291790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-07-25 10:43:54.291810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-07-25 10:43:54.291820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.608 [2024-07-25 10:43:54.291992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.608 [2024-07-25 10:43:54.292163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-07-25 10:43:54.292174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-07-25 10:43:54.292184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-07-25 10:43:54.294851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.608 [2024-07-25 10:43:54.304312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-07-25 10:43:54.304833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-07-25 10:43:54.304856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-07-25 10:43:54.304867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.608 [2024-07-25 10:43:54.305037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.608 [2024-07-25 10:43:54.305208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-07-25 10:43:54.305219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-07-25 10:43:54.305228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-07-25 10:43:54.307901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.868 [2024-07-25 10:43:54.317204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.868 [2024-07-25 10:43:54.317632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.868 [2024-07-25 10:43:54.317650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.868 [2024-07-25 10:43:54.317660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.868 [2024-07-25 10:43:54.317836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.868 [2024-07-25 10:43:54.318007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.868 [2024-07-25 10:43:54.318019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.868 [2024-07-25 10:43:54.318028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.868 [2024-07-25 10:43:54.320689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.868 [2024-07-25 10:43:54.330217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.868 [2024-07-25 10:43:54.330728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.868 [2024-07-25 10:43:54.330748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.868 [2024-07-25 10:43:54.330759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.868 [2024-07-25 10:43:54.330931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.868 [2024-07-25 10:43:54.331101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.868 [2024-07-25 10:43:54.331112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.868 [2024-07-25 10:43:54.331122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.868 [2024-07-25 10:43:54.333792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.868 [2024-07-25 10:43:54.343246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.343770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.343790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.343800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.343971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.344145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.344156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.344166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.346837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.869 [2024-07-25 10:43:54.356130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.356641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.356660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.356670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.356846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.357016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.357028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.357037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.359704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.869 [2024-07-25 10:43:54.369152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.369651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.369669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.369680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.369856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.370026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.370037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.370047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.372713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.869 [2024-07-25 10:43:54.382017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.382391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.382409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.382420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.382590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.382765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.382777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.382786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.385446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.869 [2024-07-25 10:43:54.394935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.395440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.395458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.395468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.395638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.395814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.395825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.395835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.398500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.869 [2024-07-25 10:43:54.407960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.408489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.408508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.408517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.408687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.408860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.408871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.408880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.411547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.869 [2024-07-25 10:43:54.420836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.421286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.421305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.421315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.421484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.421655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.421666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.421676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.424348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.869 [2024-07-25 10:43:54.433788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.434237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.434256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.434268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.434439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.434609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.434620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.434629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.437299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.869 [2024-07-25 10:43:54.446751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.447275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.447293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.447303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.447472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.447641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.447652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.447661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.450331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.869 [2024-07-25 10:43:54.459612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.869 [2024-07-25 10:43:54.460114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.869 [2024-07-25 10:43:54.460133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.869 [2024-07-25 10:43:54.460144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.869 [2024-07-25 10:43:54.460314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.869 [2024-07-25 10:43:54.460483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.869 [2024-07-25 10:43:54.460495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.869 [2024-07-25 10:43:54.460504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.869 [2024-07-25 10:43:54.463169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.869 [2024-07-25 10:43:54.472615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.473137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.473155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.473165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.473334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.473504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.473520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.473530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.476206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.870 [2024-07-25 10:43:54.485494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.485989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.486008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.486019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.486189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.486359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.486370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.486379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.489050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.870 [2024-07-25 10:43:54.498517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.499026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.499045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.499055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.499226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.499396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.499407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.499416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.502084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.870 [2024-07-25 10:43:54.511523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.512046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.512065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.512075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.512245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.512416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.512427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.512437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.515106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.870 [2024-07-25 10:43:54.524389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.524912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.524930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.524939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.525110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.525279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.525290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.525299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.527971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.870 [2024-07-25 10:43:54.537412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.537933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.537951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.537961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.538130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.538299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.538310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.538319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.540989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.870 [2024-07-25 10:43:54.550278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.550797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.550815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.550824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.550994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.551165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.551175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.551184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.553854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.870 [2024-07-25 10:43:54.563297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-07-25 10:43:54.563817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-07-25 10:43:54.563835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-07-25 10:43:54.563845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:50.870 [2024-07-25 10:43:54.564018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:50.870 [2024-07-25 10:43:54.564188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-07-25 10:43:54.564199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-07-25 10:43:54.564208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-07-25 10:43:54.566883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.130 [2024-07-25 10:43:54.576177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.576724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.576742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.576752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.576922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.577091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.577103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.577112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.579784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.130 [2024-07-25 10:43:54.589063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.589577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.589595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.589605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.589781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.589951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.589963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.589972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.592635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.130 [2024-07-25 10:43:54.602084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.602601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.602619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.602629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.602804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.602975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.602986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.602999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.605664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.130 [2024-07-25 10:43:54.615098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.615614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.615632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.615642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.615817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.615988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.615999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.616008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.618677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.130 [2024-07-25 10:43:54.628111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.628629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.628647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.628657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.628832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.629002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.629014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.629023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.631682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.130 [2024-07-25 10:43:54.641123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.641642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.641660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.641670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.641846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.642016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.642027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.642036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.644712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.130 [2024-07-25 10:43:54.653999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.654516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.654537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.654547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.654720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.654890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.654901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.654911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.657572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.130 [2024-07-25 10:43:54.667009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 [2024-07-25 10:43:54.667523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-25 10:43:54.667542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-25 10:43:54.667551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.130 [2024-07-25 10:43:54.667724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.130 [2024-07-25 10:43:54.667895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-25 10:43:54.667905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-25 10:43:54.667914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-25 10:43:54.670574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.131 [2024-07-25 10:43:54.680027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.680471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.680490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.680500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.680670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.680844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.680856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.680865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.683532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.131 [2024-07-25 10:43:54.692973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.693490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.693509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.693519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.693689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.693867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.693879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.693888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.696553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.131 [2024-07-25 10:43:54.705873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.706331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.706350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.706360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.706533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.706704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.706720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.706730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.709395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.131 [2024-07-25 10:43:54.718848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.719373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.719392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.719402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.719571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.719746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.719758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.719767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.722432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.131 [2024-07-25 10:43:54.731711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.732239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.732257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.732267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.732438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.732607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.732619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.732628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.735302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.131 [2024-07-25 10:43:54.744593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.745098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.745117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.745127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.745296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.745466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.745477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.745486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.748157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.131 [2024-07-25 10:43:54.757602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.758129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.758147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.758157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.758327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.758496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.758507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.758517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.761190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.131 [2024-07-25 10:43:54.770472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.770975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.770994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.771004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.771173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.771343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.771354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.771364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.774070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.131 [2024-07-25 10:43:54.783376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.783922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.783941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.783955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.784125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.784296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.131 [2024-07-25 10:43:54.784308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.131 [2024-07-25 10:43:54.784317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.131 [2024-07-25 10:43:54.786987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.131 [2024-07-25 10:43:54.796273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.131 [2024-07-25 10:43:54.796691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.131 [2024-07-25 10:43:54.796709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.131 [2024-07-25 10:43:54.796723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.131 [2024-07-25 10:43:54.796893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.131 [2024-07-25 10:43:54.797063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.132 [2024-07-25 10:43:54.797075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.132 [2024-07-25 10:43:54.797084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.132 [2024-07-25 10:43:54.799752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.132 [2024-07-25 10:43:54.809199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.132 [2024-07-25 10:43:54.809722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.132 [2024-07-25 10:43:54.809741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.132 [2024-07-25 10:43:54.809751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.132 [2024-07-25 10:43:54.809922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.132 [2024-07-25 10:43:54.810093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.132 [2024-07-25 10:43:54.810104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.132 [2024-07-25 10:43:54.810114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.132 [2024-07-25 10:43:54.812783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.132 [2024-07-25 10:43:54.822076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.132 [2024-07-25 10:43:54.822589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.132 [2024-07-25 10:43:54.822607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.132 [2024-07-25 10:43:54.822618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.132 [2024-07-25 10:43:54.822794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.132 [2024-07-25 10:43:54.822964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.132 [2024-07-25 10:43:54.822978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.132 [2024-07-25 10:43:54.822987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.132 [2024-07-25 10:43:54.825654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.392 [2024-07-25 10:43:54.834947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.835465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.835483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.835493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.835664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.392 [2024-07-25 10:43:54.835839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.392 [2024-07-25 10:43:54.835851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.392 [2024-07-25 10:43:54.835859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.392 [2024-07-25 10:43:54.838525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.392 [2024-07-25 10:43:54.847877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.848402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.848421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.848431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.848600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.392 [2024-07-25 10:43:54.848774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.392 [2024-07-25 10:43:54.848785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.392 [2024-07-25 10:43:54.848794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.392 [2024-07-25 10:43:54.851461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.392 [2024-07-25 10:43:54.860904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.861424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.861442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.861452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.861621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.392 [2024-07-25 10:43:54.861796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.392 [2024-07-25 10:43:54.861807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.392 [2024-07-25 10:43:54.861816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.392 [2024-07-25 10:43:54.864482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.392 [2024-07-25 10:43:54.873775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.874223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.874242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.874252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.874422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.392 [2024-07-25 10:43:54.874593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.392 [2024-07-25 10:43:54.874604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.392 [2024-07-25 10:43:54.874613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.392 [2024-07-25 10:43:54.877296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.392 [2024-07-25 10:43:54.886753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.887123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.887142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.887152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.887322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.392 [2024-07-25 10:43:54.887492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.392 [2024-07-25 10:43:54.887503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.392 [2024-07-25 10:43:54.887512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.392 [2024-07-25 10:43:54.890184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.392 [2024-07-25 10:43:54.899631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.900089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.900108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.900117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.900288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.392 [2024-07-25 10:43:54.900458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.392 [2024-07-25 10:43:54.900469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.392 [2024-07-25 10:43:54.900479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.392 [2024-07-25 10:43:54.903324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.392 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.392 [2024-07-25 10:43:54.912648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.913111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.913130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.913140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.913310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.392 [2024-07-25 10:43:54.913481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.392 [2024-07-25 10:43:54.913493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.392 [2024-07-25 10:43:54.913502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.392 [2024-07-25 10:43:54.913547] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.392 [2024-07-25 10:43:54.916176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.392 [2024-07-25 10:43:54.925615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.392 [2024-07-25 10:43:54.926091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.392 [2024-07-25 10:43:54.926110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.392 [2024-07-25 10:43:54.926120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.392 [2024-07-25 10:43:54.926290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.393 [2024-07-25 10:43:54.926461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.393 [2024-07-25 10:43:54.926472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.393 [2024-07-25 10:43:54.926481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.393 [2024-07-25 10:43:54.929151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.393 [2024-07-25 10:43:54.938598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.393 [2024-07-25 10:43:54.939095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.393 [2024-07-25 10:43:54.939114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.393 [2024-07-25 10:43:54.939124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.393 [2024-07-25 10:43:54.939293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.393 [2024-07-25 10:43:54.939467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.393 [2024-07-25 10:43:54.939479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.393 [2024-07-25 10:43:54.939488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.393 [2024-07-25 10:43:54.942160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.393 [2024-07-25 10:43:54.951624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.393 [2024-07-25 10:43:54.952165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.393 [2024-07-25 10:43:54.952190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.393 [2024-07-25 10:43:54.952201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.393 [2024-07-25 10:43:54.952377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.393 [2024-07-25 10:43:54.952548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.393 [2024-07-25 10:43:54.952559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.393 [2024-07-25 10:43:54.952569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.393 [2024-07-25 10:43:54.955238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.393 Malloc0 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.393 [2024-07-25 10:43:54.964527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.393 [2024-07-25 10:43:54.965035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.393 [2024-07-25 10:43:54.965054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.393 [2024-07-25 10:43:54.965064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.393 [2024-07-25 10:43:54.965235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.393 [2024-07-25 10:43:54.965404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.393 [2024-07-25 10:43:54.965416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.393 [2024-07-25 10:43:54.965425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.393 [2024-07-25 10:43:54.968096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.393 [2024-07-25 10:43:54.977551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.393 [2024-07-25 10:43:54.978057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.393 [2024-07-25 10:43:54.978076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7ba70 with addr=10.0.0.2, port=4420 00:28:51.393 [2024-07-25 10:43:54.978085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7ba70 is same with the state(5) to be set 00:28:51.393 [2024-07-25 10:43:54.978255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ba70 (9): Bad file descriptor 00:28:51.393 [2024-07-25 10:43:54.978425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.393 [2024-07-25 10:43:54.978436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.393 [2024-07-25 10:43:54.978445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.393 [2024-07-25 10:43:54.979408] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.393 [2024-07-25 10:43:54.981111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.393 10:43:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4049215 00:28:51.393 [2024-07-25 10:43:54.990553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.652 [2024-07-25 10:43:55.151134] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
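The rpc_cmd calls interleaved above are what stand up the target side for this bdevperf run: a TCP transport, a 64 MB / 512 B-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. A minimal manual equivalent, assuming a running nvmf_tgt and the standard scripts/rpc.py client from the SPDK checkout used in this run (rpc_cmd is a thin wrapper around it):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                        # TCP transport, 8 KiB IO unit
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                           # 64 MB RAM-backed bdev
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001      # allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up (the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above), the repeated "connect() failed, errno = 111" reset attempts stop and the controller reset finally completes successfully.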
00:28:59.803 00:28:59.803 Latency(us) 00:28:59.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.803 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.803 Verification LBA range: start 0x0 length 0x4000 00:28:59.803 Nvme1n1 : 15.01 8742.86 34.15 13833.56 0.00 5650.84 815.92 19084.08 00:28:59.803 =================================================================================================================== 00:28:59.803 Total : 8742.86 34.15 13833.56 0.00 5650.84 815.92 19084.08 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:00.063 rmmod nvme_tcp 00:29:00.063 rmmod nvme_fabrics 00:29:00.063 rmmod nvme_keyring 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 4050210 ']' 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 4050210 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 4050210 ']' 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 4050210 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4050210 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4050210' 00:29:00.063 killing process with pid 4050210 00:29:00.063 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 4050210 00:29:00.063 
10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 4050210 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.321 10:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.855 10:44:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:02.855 00:29:02.855 real 0m27.755s 00:29:02.855 user 1m2.240s 00:29:02.855 sys 0m8.304s 00:29:02.855 10:44:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.855 10:44:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:02.855 ************************************ 00:29:02.855 END TEST nvmf_bdevperf 00:29:02.855 ************************************ 00:29:02.855 10:44:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:02.855 10:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:02.855 10:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.855 10:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.855 ************************************ 00:29:02.855 START TEST nvmf_target_disconnect 00:29:02.855 ************************************ 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:02.856 * Looking for test storage... 
00:29:02.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.856 
10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.856 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.424 
10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.424 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:09.425 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:09.425 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:09.425 Found net devices under 0000:af:00.0: cvl_0_0 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:09.425 Found net devices under 0000:af:00.1: cvl_0_1 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:09.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:29:09.425 00:29:09.425 --- 10.0.0.2 ping statistics --- 00:29:09.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.425 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:29:09.425 00:29:09.425 --- 10.0.0.1 ping statistics --- 00:29:09.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.425 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:09.425 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:09.426 ************************************ 00:29:09.426 START TEST nvmf_target_disconnect_tc1 00:29:09.426 ************************************ 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.426 10:44:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.426 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.426 [2024-07-25 10:44:12.556122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-07-25 10:44:12.556245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3140 with addr=10.0.0.2, port=4420 00:29:09.426 [2024-07-25 10:44:12.556314] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:09.426 [2024-07-25 10:44:12.556359] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:09.426 [2024-07-25 10:44:12.556386] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:09.426 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:09.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:09.426 Initializing NVMe Controllers 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:09.426 00:29:09.426 real 0m0.117s 00:29:09.426 user 0m0.053s 00:29:09.426 sys 0m0.063s 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:09.426 ************************************ 00:29:09.426 END TEST nvmf_target_disconnect_tc1 00:29:09.426 ************************************ 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:09.426 10:44:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:09.426 ************************************ 00:29:09.426 START TEST nvmf_target_disconnect_tc2 00:29:09.426 ************************************ 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4055312 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4055312 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 4055312 ']' 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:09.426 10:44:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.426 [2024-07-25 10:44:12.706452] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:29:09.427 [2024-07-25 10:44:12.706496] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.427 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.427 [2024-07-25 10:44:12.794648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.427 [2024-07-25 10:44:12.866202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
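For the target_disconnect tests the target runs inside the cvl_0_0_ns_spdk network namespace created by nvmftestinit above, with 10.0.0.2 on cvl_0_0 (namespace side) and 10.0.0.1 on cvl_0_1 (initiator side in the default namespace). A condensed sketch of that topology setup, using only the interface names, addresses, and commands already reported in this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target application is then launched inside the namespace, as above:
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0

Here $SPDK_DIR stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout used in this run; the ping checks in the log simply verify that 10.0.0.2 and 10.0.0.1 are reachable across the namespace boundary before the tests start.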
00:29:09.427 [2024-07-25 10:44:12.866241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.427 [2024-07-25 10:44:12.866250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.427 [2024-07-25 10:44:12.866258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.427 [2024-07-25 10:44:12.866265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.427 [2024-07-25 10:44:12.866389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:09.427 [2024-07-25 10:44:12.866498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:09.427 [2024-07-25 10:44:12.866608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:09.427 [2024-07-25 10:44:12.866609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.995 Malloc0 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.995 [2024-07-25 10:44:13.580007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
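The xtrace above shows disconnect_init bringing up an NVMe-oF/TCP target for tc2: a malloc bdev is created and exported through subsystem nqn.2016-06.io.spdk:cnode1, with the namespace and the TCP listener added in the entries that follow. A minimal sketch of the same RPC sequence, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket (the test itself drives these through its rpc_cmd wrapper against the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace):

    # Sketch only, not the test script: replay the RPC calls visible in the log
    # against an already-running nvmf_tgt.
    rpc=scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_transport -t tcp -o             # "-o" kept verbatim from the log
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420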
00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.995 [2024-07-25 10:44:13.608240] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4055590 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:09.995 10:44:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.995 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.551 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4055312 00:29:12.551 10:44:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting 
I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 [2024-07-25 10:44:15.635798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 
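The workload producing these completions is the reconnect example launched just before the target process was killed with SIGKILL. Once the target goes away, every outstanding I/O on each qpair completes with an error and the CQ reports transport error -6, which is the pattern recorded here. A hedged annotation of that invocation; the flag readings are assumptions based on common SPDK example conventions, not taken from the log itself:

    # Annotation of the invocation shown in the log above.
    #   -q 32      queue depth per qpair (assumed meaning)
    #   -o 4096    I/O size in bytes (assumed meaning)
    #   -w randrw  mixed random read/write pattern (assumed meaning)
    #   -M 50      read percentage of the mix (assumed meaning)
    #   -t 10      run time in seconds (assumed meaning)
    #   -c 0xF     core mask, four cores (assumed meaning)
    #   -r '...'   transport ID: NVMe/TCP, IPv4, 10.0.0.2, service 4420
    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'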
00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Write completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.551 starting I/O failed 00:29:12.551 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 [2024-07-25 10:44:15.636036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 
Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 [2024-07-25 10:44:15.636264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed 
with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Read completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 Write completed with error (sct=0, sc=8) 00:29:12.552 starting I/O failed 00:29:12.552 [2024-07-25 10:44:15.636489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.552 [2024-07-25 10:44:15.636700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.636724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.637064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.637108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.637488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.637530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.637860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.637903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.638151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.638192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.638497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.638537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.638797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.638811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.639117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.639157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 
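The entries that follow are the reconnect path: each attempt calls connect() toward 10.0.0.2:4420, gets errno 111 (ECONNREFUSED on Linux, since nothing is listening after the target pid was killed), and the qpair is reported as failed and unrecoverable. A minimal sketch, assuming a reachable host with no listener on the port, that reproduces the same errno:

    # Assumption: nothing is listening on 10.0.0.2:4420, mirroring the state
    # right after the target is killed. Bash's /dev/tcp connect then fails;
    # with a reachable host this is "Connection refused" (errno 111).
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "connect() failed; with a reachable host and no listener this is errno 111 (ECONNREFUSED)"
    fi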
00:29:12.552 [2024-07-25 10:44:15.639490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.639530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.639844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.639886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.640199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.640240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.552 [2024-07-25 10:44:15.640596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.552 [2024-07-25 10:44:15.640653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.552 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.640930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.640974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.641359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.641400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.641738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.641780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.642175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.642215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.642598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.642638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.643012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.643054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 
00:29:12.553 [2024-07-25 10:44:15.643460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.643500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.643894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.643935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.644269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.644309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.644674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.644725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.644986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.645026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.645364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.645405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.645774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.645816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.646135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.646175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.646495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.646535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.646866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.646884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 
00:29:12.553 [2024-07-25 10:44:15.647145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.647163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.647516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.647533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.647844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.647862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.648168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.648186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.648394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.648411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.648651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.648669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.648934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.648952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.649222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.649240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.649484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.649501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.649830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.649848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 
00:29:12.553 [2024-07-25 10:44:15.650080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.650116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.650473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.650503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.650838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.650884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.651201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.651242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.651552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.651593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.651971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.652015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.652310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.652351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.652618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.652666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.652960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.652973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.653273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.653315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 
00:29:12.553 [2024-07-25 10:44:15.653561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.553 [2024-07-25 10:44:15.653574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.553 qpair failed and we were unable to recover it. 00:29:12.553 [2024-07-25 10:44:15.653898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.653942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.654301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.654341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.654736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.654778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.655096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.655155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.655403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.655443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.655767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.655808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.656108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.656148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.656512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.656552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.656903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.656917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-07-25 10:44:15.657146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.657159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.657398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.657411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.657755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.657797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.658099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.658139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.658522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.658563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.658852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.658894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.659202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.659243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.659607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.659647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.660048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.660090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.660454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.660495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-07-25 10:44:15.660789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.660802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.661043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.661077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.661472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.661513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.661902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.661944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.662321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.662361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.662743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.662786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.663147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.663189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.663549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.663589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.663975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.664016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.664346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.664387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-07-25 10:44:15.664748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.664795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.665180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.665220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.665598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.665639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.665969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.666011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.666370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.666410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.666789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.666831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.667212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.667253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.667635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.554 [2024-07-25 10:44:15.667674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.554 qpair failed and we were unable to recover it. 00:29:12.554 [2024-07-25 10:44:15.668065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.668106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.668343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.668384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-07-25 10:44:15.668761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.668802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.669182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.669223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.669558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.669599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.669896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.669909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.670150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.670192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.670572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.670613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.670951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.670993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.671373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.671414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.671630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.671642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 00:29:12.555 [2024-07-25 10:44:15.671970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.555 [2024-07-25 10:44:15.672011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-07-25 10:44:15.672375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.555 [2024-07-25 10:44:15.672416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.555 qpair failed and we were unable to recover it.
00:29:12.555 [2024-07-25 10:44:15.672798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.555 [2024-07-25 10:44:15.672840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.555 qpair failed and we were unable to recover it.
00:29:12.555 [2024-07-25 10:44:15.673221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.555 [2024-07-25 10:44:15.673261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.555 qpair failed and we were unable to recover it.
[The same three-line failure record -- "posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111", "nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." -- repeats continuously, with SPDK timestamps running from 2024-07-25 10:44:15.673646 through 10:44:15.751495 and console timestamps from 00:29:12.555 through 00:29:12.561.]
00:29:12.561 [2024-07-25 10:44:15.751796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.751839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.752224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.752265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.752636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.752677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.752994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.753036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.753420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.753461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.753849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.753891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.754287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.754328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.754662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.754702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.755121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.755163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.755527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.755567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 
00:29:12.561 [2024-07-25 10:44:15.755956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.756004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.756373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.756414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.756781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.756823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.757133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.757174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.757561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.757602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.757962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.561 [2024-07-25 10:44:15.757976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.561 qpair failed and we were unable to recover it. 00:29:12.561 [2024-07-25 10:44:15.758312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.758353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.758687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.758751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.759155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.759196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.759494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.759534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 
00:29:12.562 [2024-07-25 10:44:15.759926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.759968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.760339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.760380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.760713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.760764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.760970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.760984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.761345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.761387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.761791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.761834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.762223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.762264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.762676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.762740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.763097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.763138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.763528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.763569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 
00:29:12.562 [2024-07-25 10:44:15.763865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.763907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.764221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.764262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.764592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.764632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.765053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.765095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.765325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.765338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.765680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.765746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.766137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.766179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.766595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.766637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.767045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.767087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.767476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.767516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 
00:29:12.562 [2024-07-25 10:44:15.767903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.767947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.768307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.768320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.768631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.768672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.769071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.769113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.769470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.769483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.769718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.769732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.770085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.770127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.770493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.770534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.770924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.770968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.771354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.771395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 
00:29:12.562 [2024-07-25 10:44:15.771637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.771684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.772072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.772115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.562 qpair failed and we were unable to recover it. 00:29:12.562 [2024-07-25 10:44:15.772504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.562 [2024-07-25 10:44:15.772545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.772934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.772976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.773367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.773408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.773727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.773769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.774164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.774206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.774594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.774635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.775049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.775064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.775314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.775328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 
00:29:12.563 [2024-07-25 10:44:15.775685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.775744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.776002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.776044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.776407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.776421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.776747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.776762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.777122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.777138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.777477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.777491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.777771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.777786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.778120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.778135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.778409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.778450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.778839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.778853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 
00:29:12.563 [2024-07-25 10:44:15.779114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.779156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.779551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.779592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.779983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.780025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.780409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.780423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.780763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.780778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.781140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.781182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.781576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.781617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.782016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.782059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.782433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.782448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.782785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.782800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 
00:29:12.563 [2024-07-25 10:44:15.783119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.783161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.783554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.783596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.783907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.783922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.784200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.784214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.784554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.784595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.784983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.785025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-07-25 10:44:15.785420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-07-25 10:44:15.785460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.785852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.785893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.786297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.786338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.786731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.786774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-07-25 10:44:15.787081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.787128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.787460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.787501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.787897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.787940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.788284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.788325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.788741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.788784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.789183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.789224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.789595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.789636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.789899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.789941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.790328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.790369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.790694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.790747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-07-25 10:44:15.791104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.791146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.791467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.791509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.791836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.791878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.792274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.792315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.792728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.792770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.793166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.793210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.793504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.793546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.793945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.793988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.794403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.794445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.794832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.794875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-07-25 10:44:15.795261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.795302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.795626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.795668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.796099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.796142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.796463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.796504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.796901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.796943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.797337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.797352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.797700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.797755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.798156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.798198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.798526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.798568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.798922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.798965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-07-25 10:44:15.799361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.799410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.799825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.799867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-07-25 10:44:15.800216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-07-25 10:44:15.800258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-07-25 10:44:15.800669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-07-25 10:44:15.800711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-07-25 10:44:15.801115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-07-25 10:44:15.801157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-07-25 10:44:15.801530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-07-25 10:44:15.801570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-07-25 10:44:15.801955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-07-25 10:44:15.801998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-07-25 10:44:15.802223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-07-25 10:44:15.802239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-07-25 10:44:15.802587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-07-25 10:44:15.802629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-07-25 10:44:15.803045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-07-25 10:44:15.803087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 
00:29:12.565 [2024-07-25 10:44:15.803414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.565 [2024-07-25 10:44:15.803461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.565 qpair failed and we were unable to recover it.
[... the same three-message failure pattern — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt timestamped between 10:44:15.803414 and 10:44:15.885620 ...]
00:29:12.571 [2024-07-25 10:44:15.885578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.571 [2024-07-25 10:44:15.885620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.571 qpair failed and we were unable to recover it.
00:29:12.571 [2024-07-25 10:44:15.886016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.886060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.886457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.886497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.886842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.886884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.887267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.887308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.887705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.887759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.888135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.888176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.888558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.888599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.888921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.888964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.889364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.889406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.889745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.889787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 
00:29:12.571 [2024-07-25 10:44:15.890157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.890198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.890513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.890528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.890847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.890889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.891288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.891330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.891638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.891653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.891969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.891984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.892296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.892321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.892675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.892691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.893008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.893051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.571 qpair failed and we were unable to recover it. 00:29:12.571 [2024-07-25 10:44:15.893449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.571 [2024-07-25 10:44:15.893491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 
00:29:12.572 [2024-07-25 10:44:15.893788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.893832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.894156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.894198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.894541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.894582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.894928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.894971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.895292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.895333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.895737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.895779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.896129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.896172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.896496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.896539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.896937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.896980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.897284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.897314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 
00:29:12.572 [2024-07-25 10:44:15.897597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.897612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.897890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.897905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.898214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.898256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.898558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.898601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.898912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.898955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.899256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.899301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.899696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.899746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.900052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.900094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.900503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.900545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.900952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.900989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 
00:29:12.572 [2024-07-25 10:44:15.901240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.901281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.901607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.901650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.902103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.902147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.902481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.902523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.902863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.902907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.903326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.903368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.903596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.903611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.903892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.903907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.904243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.904258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.904464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.904491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 
00:29:12.572 [2024-07-25 10:44:15.904765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.904780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.572 qpair failed and we were unable to recover it. 00:29:12.572 [2024-07-25 10:44:15.905102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.572 [2024-07-25 10:44:15.905143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.905543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.905585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.906012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.906054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.906356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.906371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.906605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.906620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.906861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.906877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.907074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.907091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.907377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.907392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.907659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.907673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 
00:29:12.573 [2024-07-25 10:44:15.907995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.908010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.908264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.908300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.908625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.908665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.908962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.909005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.909293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.909334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.909653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.909694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.910124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.910167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.910466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.910506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.910884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.910926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.911261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.911300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 
00:29:12.573 [2024-07-25 10:44:15.911613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.911651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.911974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.912015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.912403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.912441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.912835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.912874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.913278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.913316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.913647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.913685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.913998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.914037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.914289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.914328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.573 [2024-07-25 10:44:15.914703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.573 [2024-07-25 10:44:15.914755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.573 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.915067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.915106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 
00:29:12.574 [2024-07-25 10:44:15.915466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.915507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.915812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.915853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.916197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.916237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.916626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.916639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.916951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.916991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.917242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.917281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.917695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.917755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.918139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.918181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.918506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.918547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.918849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.918892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 
00:29:12.574 [2024-07-25 10:44:15.919212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.919253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.919646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.919687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.919970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.920012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.920359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.920400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.920740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.920784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.921183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.921225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.921463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.921505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.921878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.921927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.922324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.922365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.922674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.922688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 
00:29:12.574 [2024-07-25 10:44:15.923059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.923074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.923340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.923382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.923781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.923823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.924225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.924265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.924663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.924704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.925036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.925079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.925326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.925367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.925741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.925784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.926138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.926180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.926489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.926530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 
00:29:12.574 [2024-07-25 10:44:15.926871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.926913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.927310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.927324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.927573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.927609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.928006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.928048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.928446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.928488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.928795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.928831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.574 qpair failed and we were unable to recover it. 00:29:12.574 [2024-07-25 10:44:15.929130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.574 [2024-07-25 10:44:15.929170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.929554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.929595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.929984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.930028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.930337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.930377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 
00:29:12.575 [2024-07-25 10:44:15.930697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.930750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.931124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.931166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.931571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.931613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.931879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.931922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.932234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.932276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.932614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.932656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.933006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.933049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.933372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.933413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.933800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.933842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.934235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.934275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 
00:29:12.575 [2024-07-25 10:44:15.934644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.934686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.935073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.935116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.935491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.935532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.935930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.935972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.936272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.936313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.936650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.936691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.936979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.937020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.937391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.937407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.937745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.937787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 00:29:12.575 [2024-07-25 10:44:15.938185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.575 [2024-07-25 10:44:15.938226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.575 qpair failed and we were unable to recover it. 
00:29:12.578 [2024-07-25 10:44:15.975870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.975912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.976307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.976349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.976586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.976627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.977041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.977084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.977557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.977643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.978036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.978082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.978404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.978446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.978819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.978867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.979237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.979277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.578 [2024-07-25 10:44:15.979689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.578 [2024-07-25 10:44:15.979740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.578 qpair failed and we were unable to recover it.
00:29:12.581 [2024-07-25 10:44:16.015616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.015657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.016061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.016104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.016497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.016539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.016932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.016974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.017369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.017411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.017707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.017774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.018121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.018163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.018556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.018597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.018892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.018935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.019281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.019330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 
00:29:12.581 [2024-07-25 10:44:16.019584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.019624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.019929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.019971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.020380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.020422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.020813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.020833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.021100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.021119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.021372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.021391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.021816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.021858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.022162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.022204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.022548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.022589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.022982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.023024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 
00:29:12.581 [2024-07-25 10:44:16.023417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.023458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.023855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.023897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.024198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.024239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.024635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.024676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.025088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.025130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.025431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.025472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.025826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.025866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.026202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.026244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.581 [2024-07-25 10:44:16.026647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.581 [2024-07-25 10:44:16.026688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.581 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.027093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.027135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 
00:29:12.582 [2024-07-25 10:44:16.027535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.027576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.027954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.027996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.028330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.028370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.028768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.028810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.029185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.029226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.029617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.029658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.030029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.030048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.030238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.030257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.030521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.030540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.030913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.030956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 
00:29:12.582 [2024-07-25 10:44:16.031290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.031332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.031680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.031732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.032126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.032168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.032561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.032603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.032996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.033038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.033427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.033445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.033734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.033777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.034106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.034148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.034523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.034565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.034955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.034997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 
00:29:12.582 [2024-07-25 10:44:16.035325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.035374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.035732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.035775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.036170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.036211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.036607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.036654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.037060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.037103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.037400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.037441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.037835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.037878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.582 [2024-07-25 10:44:16.038273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-07-25 10:44:16.038314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.582 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.038577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.038619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.038921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.038964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 
00:29:12.583 [2024-07-25 10:44:16.039358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.039400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.039787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.039830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.040151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.040192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.040497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.040539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.040909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.040951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.041252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.041293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.041672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.041726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.042129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.042170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.042562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.042604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.042924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.042944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 
00:29:12.583 [2024-07-25 10:44:16.043239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.043280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.043605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.043646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.043993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.044012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.044265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.044304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.044733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.044775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.045144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.045184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.045573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.045615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.046024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.046066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.046436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.046477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.046867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.046886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 
00:29:12.583 [2024-07-25 10:44:16.047233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.047254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.047602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.047643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.047972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.048015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.048409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.048450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.048843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.048886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.049281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.049323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.049619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.049660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.050075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.050117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.050513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.050554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.050938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.050957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 
00:29:12.583 [2024-07-25 10:44:16.051278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.051298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.051661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.051702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.052129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.052172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.052515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-07-25 10:44:16.052556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.583 qpair failed and we were unable to recover it. 00:29:12.583 [2024-07-25 10:44:16.052974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.053016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.053330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.053371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.053676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.053695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.053934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.053953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.054199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.054217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.054505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.054524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 
00:29:12.584 [2024-07-25 10:44:16.054807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.054851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.055239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.055280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.055581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.055622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.056015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.056057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.056450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.056491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.056862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.056904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.057210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.057251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.057557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.057578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.057932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.057974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.058346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.058387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 
00:29:12.584 [2024-07-25 10:44:16.058704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.058730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.059081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.059123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.059467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.059508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.059916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.059936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.060200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.060242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.060638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.060678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.061081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.061123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.061445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.061486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.061805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.061825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.062072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.062091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 
00:29:12.584 [2024-07-25 10:44:16.062343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.062362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.062743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.062762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.063058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.063077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.063363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.063404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.063799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.063841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.064234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.064275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.064669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.064710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.065024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.065043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.065335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.065375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 00:29:12.584 [2024-07-25 10:44:16.065753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.584 [2024-07-25 10:44:16.065796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.584 qpair failed and we were unable to recover it. 
00:29:12.584 [2024-07-25 10:44:16.066128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.066169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.066565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.066606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.066969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.066989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.067330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.067349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.067598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.067616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.067889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.067931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.068330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.068372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.068685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.068736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.069070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.069088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.069390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.069431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 
00:29:12.585 [2024-07-25 10:44:16.069799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.069841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.070238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.070280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.070678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.070730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.071104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.071145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.071539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.071580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.071971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.072013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.072327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.072369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.072761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.072803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.073045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.073092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.073486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.073528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 
00:29:12.585 [2024-07-25 10:44:16.073921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.073964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.074359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.074401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.074732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.074752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.075089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.075130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.075523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.075565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.075960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.076003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.076308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.076349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.076736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.076779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.077077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.077118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.077564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.077606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 
00:29:12.585 [2024-07-25 10:44:16.078017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.078059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.078383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.078425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.078818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.078837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.079183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.079224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.079617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.585 [2024-07-25 10:44:16.079658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.585 qpair failed and we were unable to recover it. 00:29:12.585 [2024-07-25 10:44:16.079982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.080002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.080347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.080389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.080775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.080795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.081137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.081179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.081573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.081614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 
00:29:12.586 [2024-07-25 10:44:16.082008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.082050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.082299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.082340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.082658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.082699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.083083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.083125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.083518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.083559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.083828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.083881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.084281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.084323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.084635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.084654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.085000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.085049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.085371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.085412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 
00:29:12.586 [2024-07-25 10:44:16.085730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.085750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.086054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.086095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.086487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.086528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.086808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.086827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.087220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.087261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.087592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.087641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.087990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.088033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.088413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.088455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.088858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.088901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.089238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.089280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 
00:29:12.586 [2024-07-25 10:44:16.089652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.089693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.090095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.090136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.090437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.090479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.090864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.090883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.091205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.091247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.091641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.091683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.092088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.092129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.092545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.092586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.092893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.092913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.093160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.093178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 
00:29:12.586 [2024-07-25 10:44:16.093521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.093562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.093865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.586 [2024-07-25 10:44:16.093908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.586 qpair failed and we were unable to recover it. 00:29:12.586 [2024-07-25 10:44:16.094322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.094369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.094666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.094686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.095044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.095086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.095352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.095393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.095648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.095667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.096012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.096054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.096394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.096435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.096855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.096899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 
00:29:12.587 [2024-07-25 10:44:16.097295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.097337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.097674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.097726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.098137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.098178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.098481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.098522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.098834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.098853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.099139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.099180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.099546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.099588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.099994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.100037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.100434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.100476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.100848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.100890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 
00:29:12.587 [2024-07-25 10:44:16.101216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.101257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.101630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.101672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.101976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.101995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.102337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.102356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.102682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.102732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.103071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.103112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.103509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.103550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.103867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.103887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.104238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.104278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.104667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.104708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 
00:29:12.587 [2024-07-25 10:44:16.105083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.105126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.105499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.105539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.105930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.105973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.106287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.106329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.587 [2024-07-25 10:44:16.106729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.587 [2024-07-25 10:44:16.106772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.587 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.107087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.107128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.107522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.107563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.107862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.107882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.108236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.108277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.108573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.108613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 
00:29:12.588 [2024-07-25 10:44:16.108998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.109040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.109439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.109480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.109803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.109845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.110247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.110288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.110605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.110623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.110976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.111017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.111334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.111375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.111676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.111727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.111995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.112037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.112334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.112375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 
00:29:12.588 [2024-07-25 10:44:16.112769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.112811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.113186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.113227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.113618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.113660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.114079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.114122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.114514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.114555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.114946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.114989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.115383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.115424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.115833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.115875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.116194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.116236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.116565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.116606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 
00:29:12.588 [2024-07-25 10:44:16.116999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.117041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.117410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.117451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.117821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.117863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.118260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.118301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.118697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.118837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.119208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.119248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.119621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.119662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.120053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.120096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.120401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.120442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.588 qpair failed and we were unable to recover it. 00:29:12.588 [2024-07-25 10:44:16.120834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.588 [2024-07-25 10:44:16.120890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 
00:29:12.589 [2024-07-25 10:44:16.121218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.121265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.121687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.121733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.122076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.122095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.122439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.122480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.122860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.122902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.123271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.123312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.123613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.123655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.123969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.124012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.124404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.124445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.124845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.124865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 
00:29:12.589 [2024-07-25 10:44:16.125194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.125235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.125565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.125606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.125995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.126017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.126290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.126309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.126617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.126658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.127064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.127106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.127500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.127541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.127864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.127907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.128301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.128342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.128727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.128782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 
00:29:12.589 [2024-07-25 10:44:16.129065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.129084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.129345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.129386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.129802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.129846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.130148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.130190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.130585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.130626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.130934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.130954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.131296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.131315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.131652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.131700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.132084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.132127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.132500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.132540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 
00:29:12.589 [2024-07-25 10:44:16.132942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.132961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.133282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.133300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.133547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.133566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.133937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.134000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.134269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.134309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.134629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.134671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.589 qpair failed and we were unable to recover it. 00:29:12.589 [2024-07-25 10:44:16.135078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.589 [2024-07-25 10:44:16.135120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.135443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.135484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.135791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.135810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.136137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.136178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 
00:29:12.590 [2024-07-25 10:44:16.136575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.136616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.136994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.137036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.137432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.137473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.137868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.137888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.138210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.138251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.138642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.138683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.139040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.139083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.139464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.139505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.139816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.139835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.140189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.140207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 
00:29:12.590 [2024-07-25 10:44:16.140554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.140595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.140865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.140886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.141167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.141185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.141533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.141551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.141800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.141819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.142131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.142172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.142417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.142458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.142793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.142812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.143080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.143127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.143527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.143569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 
00:29:12.590 [2024-07-25 10:44:16.143934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.143954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.144278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.144319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.144623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.144665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.145059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.145101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.145497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.145538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.145933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.145976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.146303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.146345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.146663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.146705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.147116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.147158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.147482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.147523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 
00:29:12.590 [2024-07-25 10:44:16.147838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.147880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.148213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.148255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.590 [2024-07-25 10:44:16.148650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.590 [2024-07-25 10:44:16.148692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.590 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.149075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.149117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.149441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.149483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.149741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.149783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.150169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.150210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.150532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.150573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.150895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.150914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.151234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.151254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 
00:29:12.591 [2024-07-25 10:44:16.151618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.151660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.152085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.152127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.152390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.152432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.152787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.152829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.153229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.153271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.153537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.153579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.153876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.153896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.154221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.154262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.154569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.154610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.154924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.154944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 
00:29:12.591 [2024-07-25 10:44:16.155222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.155241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.155511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.155552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.155882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.155924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.156173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.156214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.156517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.156558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.156948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.156996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.157372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.157412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.157744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.157787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.158188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.158207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.158490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.158509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 
00:29:12.591 [2024-07-25 10:44:16.158702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.158726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.158899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.158918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.159243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.159284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.159658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.159699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.160027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.160069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.160441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.160460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.160711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.160736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.161101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.161142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.161543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.161583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.161987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.162030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 
00:29:12.591 [2024-07-25 10:44:16.162390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.591 [2024-07-25 10:44:16.162408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.591 qpair failed and we were unable to recover it. 00:29:12.591 [2024-07-25 10:44:16.162654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.162672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.162935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.162955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.163216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.163267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.163641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.163682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.163991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.164032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.164415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.164456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.164870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.164913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.165252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.165293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.165676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.165731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-07-25 10:44:16.166091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.166130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.166455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.166496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.166828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.166876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.167298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.167339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.167646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.167687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.167979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.167998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.168267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.168286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.168558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.168577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.168839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.168858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.169141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.169182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-07-25 10:44:16.169512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.169554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.169944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.169987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.170308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.170349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.170745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.170789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.171129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.171170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.171438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.171479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.171860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.171903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.172273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.172314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.172707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.172757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.173052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.173102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-07-25 10:44:16.173440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.173481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.173873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.173915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.174209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.174227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.592 [2024-07-25 10:44:16.174503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.592 [2024-07-25 10:44:16.174522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.592 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.174846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.174865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.175157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.175198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.175522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.175563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.175876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.175896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.176198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.176240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.176647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.176705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-07-25 10:44:16.177062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.177081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.177349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.177368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.177556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.177573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.177939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.177958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.178138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.178157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.178465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.178506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.178902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.178946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.179242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.179261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.179635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.179676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.180005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.180047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-07-25 10:44:16.180440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.180459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.180806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.180848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.181248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.181289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.181698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.181756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.182015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.182034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.182360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.182379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.182576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.182595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.182919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.182963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.183266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.183307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.183633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.183673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-07-25 10:44:16.184003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.184044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.184358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.184377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.184624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.184643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.184997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.185039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.185434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.185475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.185817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.185860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.593 qpair failed and we were unable to recover it. 00:29:12.593 [2024-07-25 10:44:16.186206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.593 [2024-07-25 10:44:16.186248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.186588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.186630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.186949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.186992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.187361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.187380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 
00:29:12.594 [2024-07-25 10:44:16.187787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.187830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.188170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.188211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.188628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.188669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.189051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.189093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.189360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.189399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.189746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.189788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.190177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.190195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.190560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.190602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.190916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.190936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.191294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.191335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 
00:29:12.594 [2024-07-25 10:44:16.191737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.191779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.192170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.192211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.192514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.192556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.192851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.192871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.193219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.193260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.193693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.193747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.194095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.194136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.194389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.194430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.194740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.194782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 00:29:12.594 [2024-07-25 10:44:16.195176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.594 [2024-07-25 10:44:16.195218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.594 qpair failed and we were unable to recover it. 
00:29:12.594 [2024-07-25 10:44:16.195543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.594 [2024-07-25 10:44:16.195584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.594 qpair failed and we were unable to recover it.
00:29:12.594 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 10:44:16.195 through 10:44:16.277 ...]
00:29:12.873 [2024-07-25 10:44:16.277560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.873 [2024-07-25 10:44:16.277601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.873 qpair failed and we were unable to recover it.
00:29:12.873 [2024-07-25 10:44:16.278092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.278134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.278595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.278636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.279090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.279134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.279400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.279440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.279822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.279865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.280191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.280233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.280607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.280648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.280951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.280994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.281344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.281384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.281784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.281826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 
00:29:12.873 [2024-07-25 10:44:16.282169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.282210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.282556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.282597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.282870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.282913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.283215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.283234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.283591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.283632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.283982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.284022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.284202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.284220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.284526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.284567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.284940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.284983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.285351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.285393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 
00:29:12.873 [2024-07-25 10:44:16.285798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.285841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.286182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.286223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.286632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.286673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.286999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.287042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.287348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.287369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.287559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.287578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.287864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.287906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.288175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.288217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.288549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.288568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.288860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.288903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 
00:29:12.873 [2024-07-25 10:44:16.289243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.289284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.289654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.289701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.289964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.290005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.873 qpair failed and we were unable to recover it. 00:29:12.873 [2024-07-25 10:44:16.290394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.873 [2024-07-25 10:44:16.290435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.290745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.290788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.291116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.291157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.291547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.291590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.291920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.291963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.292276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.292296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.292536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.292556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 
00:29:12.874 [2024-07-25 10:44:16.292819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.292876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.293251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.293294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.293680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.293699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.293928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.293969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.294319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.294368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.294712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.294779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.295096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.295138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.295545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.295586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.295952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.295994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.296313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.296331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 
00:29:12.874 [2024-07-25 10:44:16.296685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.296740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.297068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.297114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.297553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.297595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.297895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.297939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.298264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.298306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.298649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.298691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.299050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.299091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.299450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.299470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.299837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.299879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.300220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.300262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 
00:29:12.874 [2024-07-25 10:44:16.300678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.300728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.301105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.301148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.301535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.301554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.874 [2024-07-25 10:44:16.301831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.874 [2024-07-25 10:44:16.301851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.874 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.302050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.302067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.302296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.302316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.302608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.302650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.302980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.303023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.303291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.303310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.303565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.303606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 
00:29:12.875 [2024-07-25 10:44:16.304028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.304071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.304323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.304364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.304760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.304805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.305123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.305143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.305434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.305475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.305771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.305814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.306134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.306176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.306476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.306518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.306933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.306976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.307380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.307421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 
00:29:12.875 [2024-07-25 10:44:16.307843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.307886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.308326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.308367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.308785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.308828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.309197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.309239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.309644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.309685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.310069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.310111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.310386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.310429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.310805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.310848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.311243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.311284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.311547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.311588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 
00:29:12.875 [2024-07-25 10:44:16.311993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.312036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.312382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.312426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.312833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.312875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.313213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.313252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.313648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.313689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.314022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.314063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.314377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.314397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.314761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.314805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.315144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.315185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.315613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.315654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 
00:29:12.875 [2024-07-25 10:44:16.315963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.875 [2024-07-25 10:44:16.316005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.875 qpair failed and we were unable to recover it. 00:29:12.875 [2024-07-25 10:44:16.316343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.316384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.316702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.316726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.317075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.317117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.317421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.317463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.317854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.317897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.318219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.318260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.318592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.318612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.318901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.318943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.319294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.319336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 
00:29:12.876 [2024-07-25 10:44:16.319703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.319728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.320009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.320052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.320454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.320498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.320894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.320936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.321258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.321299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.321697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.321749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.322118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.322160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.322421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.322462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.322807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.322849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.323095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.323142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 
00:29:12.876 [2024-07-25 10:44:16.323517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.323559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.323940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.323985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.324313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.324354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.324745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.324787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.325166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.325209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.325624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.325667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.326074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.326124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.326374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.326393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.326739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.326782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 00:29:12.876 [2024-07-25 10:44:16.327082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.876 [2024-07-25 10:44:16.327123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:12.876 qpair failed and we were unable to recover it. 
00:29:12.876 [2024-07-25 10:44:16.327474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.876 [2024-07-25 10:44:16.327519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:12.876 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 10:44:16.327 through 10:44:16.395 ...]
00:29:12.881 [2024-07-25 10:44:16.395679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.881 [2024-07-25 10:44:16.395770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420
00:29:12.881 qpair failed and we were unable to recover it.
00:29:12.881 [2024-07-25 10:44:16.397069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.882 [2024-07-25 10:44:16.397105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:12.882 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats against tqpair=0x7fae08000b90 (addr=10.0.0.2, port=4420, errno = 111) from 10:44:16.397 through 10:44:16.411 ...]
00:29:12.882 [2024-07-25 10:44:16.411499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.882 [2024-07-25 10:44:16.411514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.882 qpair failed and we were unable to recover it. 00:29:12.882 [2024-07-25 10:44:16.411746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.882 [2024-07-25 10:44:16.411789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.882 qpair failed and we were unable to recover it. 00:29:12.882 [2024-07-25 10:44:16.413242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.882 [2024-07-25 10:44:16.413271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.882 qpair failed and we were unable to recover it. 00:29:12.882 [2024-07-25 10:44:16.413624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.882 [2024-07-25 10:44:16.413639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.882 qpair failed and we were unable to recover it. 00:29:12.882 [2024-07-25 10:44:16.413961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.414006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.414329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.414370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.414746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.414761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.415050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.415064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.415388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.415403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.415769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.415812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 
00:29:12.883 [2024-07-25 10:44:16.416073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.416114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.416378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.416419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.416744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.416785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.417060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.417101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.417519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.417561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.417816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.417861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.419058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.419085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.419445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.419488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.421097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.421125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.421452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.421469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 
00:29:12.883 [2024-07-25 10:44:16.421809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.421852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.422188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.422229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.422548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.422597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.422953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.422968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.423213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.423227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.423435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.423449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.423731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.423775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.425030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.425058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.425238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.425252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.425581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.425623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 
00:29:12.883 [2024-07-25 10:44:16.425959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.426001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.426320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.426361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.426707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.426726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.426976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.426990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.427210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.427224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.427414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.427428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.428439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.428467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.428745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.428759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.428968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.429016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 00:29:12.883 [2024-07-25 10:44:16.429988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.883 [2024-07-25 10:44:16.430014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.883 qpair failed and we were unable to recover it. 
00:29:12.883 [2024-07-25 10:44:16.430306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.430321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.430626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.430667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.431035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.431076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.431445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.431486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.431821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.431836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.432030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.432070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.432421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.432474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.432740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.432754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.433020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.433056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.433366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.433444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 
00:29:12.884 [2024-07-25 10:44:16.433775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.433822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.434126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.434168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.434555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.434598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.434994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.435037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.436475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.436508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.436808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.436829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.437127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.437170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.437399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.437417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.437696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.437722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.438447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.438474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 
00:29:12.884 [2024-07-25 10:44:16.438724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.438744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.439005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.439024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.439277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.439299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.439512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.439530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.439706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.439728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.439931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.439949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.440164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.440182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.440492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.440510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.440722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.440742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.884 [2024-07-25 10:44:16.440943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.440961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 
00:29:12.884 [2024-07-25 10:44:16.441223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.884 [2024-07-25 10:44:16.441241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.884 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.441485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.441504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.441777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.441796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.442040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.442058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.442347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.442365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.442562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.442580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.442926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.442945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.443123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.443140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.443459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.443477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.443663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.443682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 
00:29:12.885 [2024-07-25 10:44:16.443870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.443889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.444166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.444185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.444497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.444515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.444689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.444706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.444907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.444926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.445274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.445301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.445529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.445543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.445768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.445782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.446134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.446147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.446341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.446361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 
00:29:12.885 [2024-07-25 10:44:16.446676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.446694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.446961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.446980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.447246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.447264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.447544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.447562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.447869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.447888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.448154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.448172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.448484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.448502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.448811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.448829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.449115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.449134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.449389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.449407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 
00:29:12.885 [2024-07-25 10:44:16.449589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.449607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.449937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.449956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.450284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.450302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.450641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.450659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.450925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.885 [2024-07-25 10:44:16.450944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.885 qpair failed and we were unable to recover it. 00:29:12.885 [2024-07-25 10:44:16.451198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.451216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.451509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.451526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.451776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.451795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.452037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.452055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.452386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.452404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 
00:29:12.886 [2024-07-25 10:44:16.452661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.452679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.453017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.453036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.453275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.453293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.453536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.453554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.453760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.453779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.454114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.454132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.454377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.454395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.454648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.454666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.454868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.454886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.455069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.455087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 
00:29:12.886 [2024-07-25 10:44:16.455393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.455411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.455618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.455636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.455889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.455907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.456107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.456124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.456365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.456384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.456740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.456758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.456943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.456961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.457201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.457220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.457464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.457482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 00:29:12.886 [2024-07-25 10:44:16.457667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.886 [2024-07-25 10:44:16.457688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.886 qpair failed and we were unable to recover it. 
00:29:12.886 [2024-07-25 10:44:16.457932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.886 [2024-07-25 10:44:16.457950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:12.886 qpair failed and we were unable to recover it.
00:29:12.886 [2024-07-25 10:44:16.458227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.886 [2024-07-25 10:44:16.458256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:12.886 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from 10:44:16.458 through 10:44:16.517 ...]
00:29:12.892 [2024-07-25 10:44:16.517654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.892 [2024-07-25 10:44:16.517671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:12.892 qpair failed and we were unable to recover it.
00:29:12.892 [2024-07-25 10:44:16.517983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.518001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.518261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.518279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.518627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.518644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.518911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.518930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.519164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.519182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.519524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.519541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.519797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.519815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.520054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.520071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.520351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.520369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 00:29:12.892 [2024-07-25 10:44:16.520719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.520737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.892 qpair failed and we were unable to recover it. 
00:29:12.892 [2024-07-25 10:44:16.521014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.892 [2024-07-25 10:44:16.521032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.521214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.521232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.521490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.521507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.521762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.521780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.522034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.522052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.522354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.522372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.522654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.522672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.522926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.522944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.523267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.523285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.523530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.523548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 
00:29:12.893 [2024-07-25 10:44:16.523799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.523816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.524054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.524072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.524393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.524411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.524725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.524742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.524939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.524956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.525206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.525223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.525476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.525494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.525796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.525814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.526144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.526163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.526493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.526513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 
00:29:12.893 [2024-07-25 10:44:16.526755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.526772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.527103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.527120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.527380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.527398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.527586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.527604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.527932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.527950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.528230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.528247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.528554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.528571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.528849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.528867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.529057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.529075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.529397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.529415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 
00:29:12.893 [2024-07-25 10:44:16.529723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.529742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.530073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.530090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.530339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.530356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.530607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.530625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.530863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.530881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.893 [2024-07-25 10:44:16.531206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.893 [2024-07-25 10:44:16.531223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.893 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.531533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.531551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.531854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.531872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.532182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.532200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.532459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.532477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 
00:29:12.894 [2024-07-25 10:44:16.532713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.532735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.533091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.533109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.533349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.533367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.533616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.533634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.533962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.533980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.534235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.534252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.534502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.534519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.534760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.534778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.535049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.535067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.535322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.535340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 
00:29:12.894 [2024-07-25 10:44:16.535589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.535607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.535868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.535886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.536191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.536208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.536511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.536528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.536825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.536843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.537078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.537096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.537346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.537363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.537602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.537619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.537947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.537964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.538162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.538183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 
00:29:12.894 [2024-07-25 10:44:16.538489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.538506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.538833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.538850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.539084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.539102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.539431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.539448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.539750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.539767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.894 [2024-07-25 10:44:16.540096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.894 [2024-07-25 10:44:16.540114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.894 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.540384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.540402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.540707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.540728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.540990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.541007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.541339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.541357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 
00:29:12.895 [2024-07-25 10:44:16.541681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.541699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.542041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.542060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.542389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.542407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.542512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.542529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.542787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.542805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.542928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.542946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.543199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.543216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.543407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.543424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.543747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.543765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.544034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.544051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 
00:29:12.895 [2024-07-25 10:44:16.544240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.544258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.544581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.544599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.544873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.544891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.545219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.545236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.545434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.545452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.545719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.545737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.545975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.545993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.546181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.546199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.546465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.546482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.546737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.546755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 
00:29:12.895 [2024-07-25 10:44:16.547014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.547032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.547298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.547316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.547644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.547662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.547834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.547852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.548105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.548122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.548461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.548479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.548749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.548767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.548954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.548972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.549329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.549347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.549525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.549547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 
00:29:12.895 [2024-07-25 10:44:16.549897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.549915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.550187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.895 [2024-07-25 10:44:16.550204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.895 qpair failed and we were unable to recover it. 00:29:12.895 [2024-07-25 10:44:16.550509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.550527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.550782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.550800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.551035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.551053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.551305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.551322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.551524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.551542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.551736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.551754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.551939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.551956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.552120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.552137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 
00:29:12.896 [2024-07-25 10:44:16.552390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.552408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.552718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.552737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.553051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.553068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.553332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.553350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.553623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.553640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.553875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.553893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.554158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.554176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.554417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.554434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.554621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.554639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.554889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.554907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 
00:29:12.896 [2024-07-25 10:44:16.555159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.555177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.555419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.555437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.555627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.555645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.555885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.555903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.556157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.556175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.556478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.556496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.556746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.556764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.556951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.556969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.557302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.557320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.557500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.557518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 
00:29:12.896 [2024-07-25 10:44:16.557752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.557770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.558070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.558088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.558355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.558373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:12.896 [2024-07-25 10:44:16.558728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.896 [2024-07-25 10:44:16.558746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:12.896 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.559095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.559114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.559414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.559434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.559725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.559743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.560045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.560063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.560328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.560346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.560685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.560706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 
00:29:13.174 [2024-07-25 10:44:16.560959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.560977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.561273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.561291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.561595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.561612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.561863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.561882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.562136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.562153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.562359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.562376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.562697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.562726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.563033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.563051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.563170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.563187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.563458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.563475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 
00:29:13.174 [2024-07-25 10:44:16.563737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.563755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.564088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.564105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.564416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.564433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.564764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.564782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.564969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.564987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.565186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.565203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.565565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.565583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.565909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.565927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.566096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.566113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.566300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.566318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 
00:29:13.174 [2024-07-25 10:44:16.566552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.566569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.566817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.566835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.567015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.567033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.567277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.567294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.567559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.567577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.174 [2024-07-25 10:44:16.567908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.174 [2024-07-25 10:44:16.567926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.174 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.568179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.568197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.568374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.568392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.568666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.568684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.568944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.568962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 
00:29:13.175 [2024-07-25 10:44:16.569284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.569301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.569539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.569557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.569829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.569847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.570085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.570103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.570430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.570447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.570726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.570744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.570999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.571016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.571249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.571267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.571473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.571490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.571811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.571832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 
00:29:13.175 [2024-07-25 10:44:16.572110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.572128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.572320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.572338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.572575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.572593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.572918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.572936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.573269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.573286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.573626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.573644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.573949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.573967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.574290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.574308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.574587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.574605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.574860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.574877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 
00:29:13.175 [2024-07-25 10:44:16.575184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.575201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.575449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.575467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.575718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.575737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.576085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.576102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.576351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.576369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.576606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.576623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.576898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.576916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.577109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.577126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.577361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.577379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.577656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.577674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 
00:29:13.175 [2024-07-25 10:44:16.577982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.578000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.578248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.578265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.175 [2024-07-25 10:44:16.578436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.175 [2024-07-25 10:44:16.578454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.175 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.578791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.578809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.579057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.579075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.579376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.579393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.579726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.579743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.580095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.580112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.580434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.580452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.580643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.580660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 
00:29:13.176 [2024-07-25 10:44:16.580964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.580982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.581241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.581259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.581427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.581445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.581682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.581699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.582019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.582037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.582363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.582381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.582638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.582656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.582983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.583001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.583196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.583213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.583476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.583496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 
00:29:13.176 [2024-07-25 10:44:16.583830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.583849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.584096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.584114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.584348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.584365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.584697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.584717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.584994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.585012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.585315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.585332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.585589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.585607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.585926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.585944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.586269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.586286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.586546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.586564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 
00:29:13.176 [2024-07-25 10:44:16.586819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.586836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.587090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.587108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.587358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.587376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.587568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.587586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.587851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.587869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.588116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.588134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.588314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.588332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.588682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.588700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.589009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.589027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 00:29:13.176 [2024-07-25 10:44:16.589280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.176 [2024-07-25 10:44:16.589298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.176 qpair failed and we were unable to recover it. 
00:29:13.176 [2024-07-25 10:44:16.589544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.589562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.589761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.589779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.590038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.590056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.590186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.590203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.590383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.590401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.590737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.590755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.591003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.591041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.591419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.591449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.591700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.591720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.591956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.591970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 
00:29:13.177 [2024-07-25 10:44:16.592195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.592209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.592377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.592390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.592628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.592642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.592752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.592765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.593070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.593083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.593425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.593439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.593755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.593768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.594005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.594018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.594265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.594278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.594455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.594469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 
00:29:13.177 [2024-07-25 10:44:16.594707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.594725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.594889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.594903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.595071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.595085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.595327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.595340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.595565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.595579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.595890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.595903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.596195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.596208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.596513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.596527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.596843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.596857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.597015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.597029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 
00:29:13.177 [2024-07-25 10:44:16.597207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.597220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.597536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.597550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.597786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.597800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.598002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.598015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.598324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.598337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.177 qpair failed and we were unable to recover it. 00:29:13.177 [2024-07-25 10:44:16.598624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.177 [2024-07-25 10:44:16.598637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.598895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.598909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.599134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.599148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.599408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.599421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.599647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.599660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 
00:29:13.178 [2024-07-25 10:44:16.599954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.599968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.600261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.600275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.600435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.600448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.600741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.600754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.601048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.601061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.601288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.601301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.601472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.601488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.601804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.601817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.602045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.602058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 00:29:13.178 [2024-07-25 10:44:16.602300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.178 [2024-07-25 10:44:16.602313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.178 qpair failed and we were unable to recover it. 
00:29:13.178 [2024-07-25 10:44:16.602608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.178 [2024-07-25 10:44:16.602622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:13.178 qpair failed and we were unable to recover it.
00:29:13.178 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 10:44:16.602608 through 10:44:16.660270 ...]
00:29:13.184 [2024-07-25 10:44:16.660256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.184 [2024-07-25 10:44:16.660270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:13.184 qpair failed and we were unable to recover it.
00:29:13.184 [2024-07-25 10:44:16.660498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.184 [2024-07-25 10:44:16.660512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.184 qpair failed and we were unable to recover it. 00:29:13.184 [2024-07-25 10:44:16.660689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.184 [2024-07-25 10:44:16.660703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.184 qpair failed and we were unable to recover it. 00:29:13.184 [2024-07-25 10:44:16.660864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.660879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.661139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.661158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.661435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.661453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.661643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.661660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.661837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.661855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.662053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.662072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.662347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.662364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.662563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.662580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 
00:29:13.185 [2024-07-25 10:44:16.662818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.662836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.663036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.663054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.663302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.663320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.663580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.663598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.663797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.663816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.663995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.664013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.664180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.664198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.664370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.664387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.664568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.664585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.664823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.664842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 
00:29:13.185 [2024-07-25 10:44:16.665030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.665048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.665296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.665313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.665500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.665518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.665689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.665706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.665976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.665994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.666215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.666233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.666429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.666450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.666699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.666723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.666920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.666937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.667122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.667140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 
00:29:13.185 [2024-07-25 10:44:16.667309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.667327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.667564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.667581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.667768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.667786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.667981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.667999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.668283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.668301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.668473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.668491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.668755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.668773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.669022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.669040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.669208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.185 [2024-07-25 10:44:16.669226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.185 qpair failed and we were unable to recover it. 00:29:13.185 [2024-07-25 10:44:16.669410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.669428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 
00:29:13.186 [2024-07-25 10:44:16.669688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.669706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.669891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.669908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.670020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.670036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.670201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.670219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.670469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.670487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.670663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.670681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.670908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.670926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.671088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.671104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.671351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.671369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.671552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.671569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 
00:29:13.186 [2024-07-25 10:44:16.671810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.671828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.672068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.672086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.672332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.672349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.672666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.672701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.672879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.672899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.673136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.673153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.673413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.673431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.673607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.673624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.673851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.673869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.674199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.674216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 
00:29:13.186 [2024-07-25 10:44:16.674424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.674442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.674680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.674697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.674874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.674892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.675139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.675156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.675465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.675482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.675725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.675743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.676158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.676176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.676439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.676456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.676640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.676658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.676966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.676984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 
00:29:13.186 [2024-07-25 10:44:16.677165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.677182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.677434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.677452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.677702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.677725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.677980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.677997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.678179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.678196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.678462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.678480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.678666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.186 [2024-07-25 10:44:16.678684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.186 qpair failed and we were unable to recover it. 00:29:13.186 [2024-07-25 10:44:16.678886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.678904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.679034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.679051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.679246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.679264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 
00:29:13.187 [2024-07-25 10:44:16.679460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.679478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.679722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.679740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.679927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.679944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.680259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.680276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.680446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.680463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.680652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.680669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.680843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.680861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.681030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.681047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.681300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.681318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.681648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.681665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 
00:29:13.187 [2024-07-25 10:44:16.681837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.681855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.682179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.682197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.682507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.682525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.682779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.682797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.683105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.683123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.683361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.683378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.683577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.683594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.683783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.683801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.684127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.684145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.684337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.684354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 
00:29:13.187 [2024-07-25 10:44:16.684591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.684608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.684890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.684909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.685081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.685099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.685275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.685293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.685626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.685643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.685880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.685898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.686074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.686091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.686346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.686365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.686669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.686687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.686937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.686956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 
00:29:13.187 [2024-07-25 10:44:16.687193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.687211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.687411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.687428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.687663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.687681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.687914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.687932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.688169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.187 [2024-07-25 10:44:16.688186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.187 qpair failed and we were unable to recover it. 00:29:13.187 [2024-07-25 10:44:16.688363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.688380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.688712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.688743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.688924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.688942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.689204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.689221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.689391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.689408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 
00:29:13.188 [2024-07-25 10:44:16.689642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.689660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.689929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.689948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.690134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.690151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.690396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.690414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.690587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.690603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.690907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.690925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.691124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.691142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.691375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.691392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.691647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.691665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 00:29:13.188 [2024-07-25 10:44:16.691971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.188 [2024-07-25 10:44:16.691989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.188 qpair failed and we were unable to recover it. 
00:29:13.188 [2024-07-25 10:44:16.692283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.188 [2024-07-25 10:44:16.692300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:13.188 qpair failed and we were unable to recover it.
00:29:13.188 [... the same three-line connect()/qpair error repeats for tqpair=0x1bdd1a0 through 10:44:16.703 ...]
00:29:13.189 [2024-07-25 10:44:16.704031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.189 [2024-07-25 10:44:16.704067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420
00:29:13.189 qpair failed and we were unable to recover it.
00:29:13.189 [... identical errors repeat for tqpair=0x7fae10000b90 through 10:44:16.714 ...]
00:29:13.190 [2024-07-25 10:44:16.714415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.190 [2024-07-25 10:44:16.714436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:13.190 qpair failed and we were unable to recover it.
00:29:13.190 [... identical errors repeat for tqpair=0x1bdd1a0 through 10:44:16.738 ...]
00:29:13.193 [2024-07-25 10:44:16.738911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.193 [2024-07-25 10:44:16.738947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:13.193 qpair failed and we were unable to recover it.
00:29:13.193 [... identical errors continue for tqpair=0x7fae10000b90 from 10:44:16.739 onward ...]
00:29:13.194 [2024-07-25 10:44:16.746545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.194 [2024-07-25 10:44:16.746562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420
00:29:13.194 qpair failed and we were unable to recover it.
00:29:13.194 [2024-07-25 10:44:16.746841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.746859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.747055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.747072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.747253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.747271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.747595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.747612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.747915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.747932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.748272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.748289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.748525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.748542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.748739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.748757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.749028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.749045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.749282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.749299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 
00:29:13.194 [2024-07-25 10:44:16.749650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.749669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.749851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.749872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.750073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.750091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.750344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.750362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.750610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.750628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.750883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.750901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.751097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.751114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.751419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.751436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.751739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.751757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.751992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.752009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 
00:29:13.194 [2024-07-25 10:44:16.752267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.752284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.752638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.752656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.752971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.752989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.753192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.753210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.753469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.753486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.753742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.753761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.194 [2024-07-25 10:44:16.754083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.194 [2024-07-25 10:44:16.754101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.194 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.754424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.754442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.754770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.754788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.755058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.755075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 
00:29:13.195 [2024-07-25 10:44:16.755330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.755348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.755671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.755689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.755944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.755961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.756296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.756314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.756643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.756661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.756914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.756932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.757207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.757224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.757567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.757587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.757835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.757863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.758130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.758147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 
00:29:13.195 [2024-07-25 10:44:16.758478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.758495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.758822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.758841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.759154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.759172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.759359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.759376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.759609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.759627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.759950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.759968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.760294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.760312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.760568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.760585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.760818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.760836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.761103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.761121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 
00:29:13.195 [2024-07-25 10:44:16.761298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.761316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.761575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.761593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.761895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.761913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.762156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.762173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.762421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.762438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.762764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.762782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.763058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.763076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.763378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.763395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.763581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.763599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.763924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.763942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 
00:29:13.195 [2024-07-25 10:44:16.764268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.764285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.764592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.764610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.764913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.764932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.765166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.195 [2024-07-25 10:44:16.765184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.195 qpair failed and we were unable to recover it. 00:29:13.195 [2024-07-25 10:44:16.765504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.765524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.765834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.765853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.766090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.766107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.766273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.766291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.766544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.766561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.766803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.766821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 
00:29:13.196 [2024-07-25 10:44:16.766993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.767011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.767245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.767262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.767590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.767608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.767873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.767892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.768156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.768173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.768425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.768443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.768701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.768723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.769028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.769046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.769282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.769300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.769570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.769588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 
00:29:13.196 [2024-07-25 10:44:16.769889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.769908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.770210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.770228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.770465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.770483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.770685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.770702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.770996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.771014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.771222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.771240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.771423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.771441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.771678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.771695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.772009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.772027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.772269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.772287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 
00:29:13.196 [2024-07-25 10:44:16.772488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.772506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.772674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.772695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.773051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.773069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.773269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.773286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.773596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.773613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.773936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.773953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.774133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.774151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.774454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.774472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.774724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.774741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.774992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.775009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 
00:29:13.196 [2024-07-25 10:44:16.775264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.775282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.775610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.196 [2024-07-25 10:44:16.775627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.196 qpair failed and we were unable to recover it. 00:29:13.196 [2024-07-25 10:44:16.775865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.775883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.776187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.776205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.776535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.776553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.776882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.776900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.777137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.777155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.777480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.777498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.777801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.777818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.778098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.778115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 
00:29:13.197 [2024-07-25 10:44:16.778325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.778342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.778698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.778727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.778962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.778980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.779239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.779257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.779504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.779521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.779701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.779725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.779976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.779994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.780194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.780211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.780393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.780411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.780599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.780616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 
00:29:13.197 [2024-07-25 10:44:16.780871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.780889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.781147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.781165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.781426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.781444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.781633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.781651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.781905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.781923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.782176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.782193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.782444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.782462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.782696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.782718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.782942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.782959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 00:29:13.197 [2024-07-25 10:44:16.783199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 [2024-07-25 10:44:16.783216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. 
00:29:13.197 [2024-07-25 10:44:16.783524 .. 10:44:16.837164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.197 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.197 qpair failed and we were unable to recover it. (this error pair and the "qpair failed" line repeat identically for every connect attempt against tqpair=0x1bdd1a0 in this interval)
00:29:13.203 [2024-07-25 10:44:16.837473 .. 10:44:16.842347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. (same repeating failure, now reported against tqpair=0x7fae10000b90)
00:29:13.203 [2024-07-25 10:44:16.842652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.842669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.842996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.843013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.843263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.843280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.843399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.843416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.843695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.843718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.844049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.844067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.844301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.844319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.844646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.844664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.844958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.844976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.845159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.845177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 
00:29:13.204 [2024-07-25 10:44:16.845454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.845472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.845726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.845744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.846000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.846017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.846289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.846306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.846482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.846499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.846773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.846791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.847099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.847116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.847370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.847387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.847724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.847742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.847989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.848007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 
00:29:13.204 [2024-07-25 10:44:16.848210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.848227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.848484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.848502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.848779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.848798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.848993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.849011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.849215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.849232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.849421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.849438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.849674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.849692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.849946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.849964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.850168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.850186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.850490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.850508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 
00:29:13.204 [2024-07-25 10:44:16.850812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.850830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.851071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.851089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.851343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.851360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.851710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.851733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.851923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.851941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.852195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.204 [2024-07-25 10:44:16.852213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-07-25 10:44:16.852521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.852539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.852878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.852896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.853189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.853207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.853510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.853528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 
00:29:13.205 [2024-07-25 10:44:16.853848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.853866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.854123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.854141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.854396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.854414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.854658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.854675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.854958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.854979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.855169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.855187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.855457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.855475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.855661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.855679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.855852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.855870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.856190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.856212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 
00:29:13.205 [2024-07-25 10:44:16.856462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.856479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.856783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.856800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.856986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.857004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.857332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.857349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.857659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.857677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.857941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.857959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.858231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.858248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.858507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.858525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.858723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.858740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-07-25 10:44:16.859001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.859019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 
00:29:13.205 [2024-07-25 10:44:16.859140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.205 [2024-07-25 10:44:16.859157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.859406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.859425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.859698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.859723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.859916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.859935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.860212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.860229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.860434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.860451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.860701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.860723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.861057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.861075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.861339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.861357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.861643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.861660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 
00:29:13.487 [2024-07-25 10:44:16.861916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.861934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.862122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.862140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.862378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.862396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.862526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.862543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.862787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.862805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.862999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.863016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.863284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.863306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.863501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.863517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.863822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.863839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.864140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.864158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 
00:29:13.487 [2024-07-25 10:44:16.864425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-07-25 10:44:16.864443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-07-25 10:44:16.864614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.864632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.864940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.864958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.865235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.865253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.865520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.865537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.865849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.865867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.866178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.866196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.866432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.866449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.866750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.866768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.866941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.866958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 
00:29:13.488 [2024-07-25 10:44:16.867145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.867163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.867413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.867431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.867610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.867628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.867936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.867955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.868209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.868227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.868472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.868490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.868737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.868755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.869100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.869118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.869310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.869327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.869584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.869601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 
00:29:13.488 [2024-07-25 10:44:16.869856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.869874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.870110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.870128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.870408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.870425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.870674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.870698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.870961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.870978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.871234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.871252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.871501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.871518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.871764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.871782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.872066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.872083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.872341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.872359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 
00:29:13.488 [2024-07-25 10:44:16.872608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.872625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.872794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.872813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.873060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.873078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.873319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.873337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.873585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-07-25 10:44:16.873602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-07-25 10:44:16.873843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.873861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.874113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.874131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.874303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.874321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.874555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.874573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.874826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.874867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 
00:29:13.489 [2024-07-25 10:44:16.875245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.875285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.875518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.875558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.875942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.875983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.876202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.876242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.876527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.876545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.876800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.876835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.877190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.877231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.877545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.877585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.877972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.878014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.878395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.878437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 
00:29:13.489 [2024-07-25 10:44:16.878687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.878735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.879000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.879041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.879332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.879372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.879620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.879660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.879907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.879948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.880186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.880227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.880603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.880645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.880946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.880988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.881316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.881359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.881619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.881660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 
00:29:13.489 [2024-07-25 10:44:16.881970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.882011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.882310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.882351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.882597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.882637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.882894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.882932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.883175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.883192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.883300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.883317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.883424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.883465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.883709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.883761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-07-25 10:44:16.883934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-07-25 10:44:16.883975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.884245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.884263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 
00:29:13.490 [2024-07-25 10:44:16.884549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.884593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.884831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.884849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.885183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.885223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.885580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.885620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.885869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.885887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.886150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.886167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.886344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.886361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.886633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.886673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.886871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.886889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.887140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.887181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 
00:29:13.490 [2024-07-25 10:44:16.887470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.887510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.887792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.887810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.888130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.888171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.888457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.888496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.888786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.888833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.889028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.889046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.889257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.889273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.889532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.889549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.889826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.889845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.890205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.890245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 
00:29:13.490 [2024-07-25 10:44:16.890399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.890439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.890762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.890809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.891133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.891151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.891425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.891443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.891697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.891727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.891905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.891922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.892177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.892194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.892433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.892450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.892627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.892644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.892826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.892844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 
00:29:13.490 [2024-07-25 10:44:16.893092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.893132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.893501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-07-25 10:44:16.893551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-07-25 10:44:16.893879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.893897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.894146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.894164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.894395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.894436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.894824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.894865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.895169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.895210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.895516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.895558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.895820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.895838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.896086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.896103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 
00:29:13.491 [2024-07-25 10:44:16.896289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.896307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.896544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.896561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.896867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.896885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.897144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.897185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.897473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.897513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.897772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.897790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.898036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.898054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.898234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.898251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.898448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.898493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.898752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.898794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 
00:29:13.491 [2024-07-25 10:44:16.899063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.899104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.899338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.899378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.899602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.899642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.899872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.899890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.900213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.900253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.900538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.900578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.900956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.900998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.901350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.901368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.901603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.901620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.901868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.901886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 
00:29:13.491 [2024-07-25 10:44:16.902135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.902152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.902456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.902496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.902803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.902845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.903199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.903239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.903539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.903579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.903876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.491 [2024-07-25 10:44:16.903894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.491 qpair failed and we were unable to recover it. 00:29:13.491 [2024-07-25 10:44:16.904083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.904101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.904297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.904338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.904559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.904599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.904849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.904890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 
00:29:13.492 [2024-07-25 10:44:16.905184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.905202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.905441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.905458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.905718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.905736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.905975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.905993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.906237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.906278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.906565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.906605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.906860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.906878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.907053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.907094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.907345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.907385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.907725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.907767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 
00:29:13.492 [2024-07-25 10:44:16.908148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.908188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.908421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.908462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.908808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.908863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.909114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.909155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.909465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.909505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.909804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.909845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.910063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.910080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.910320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.910592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.910633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.910956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.910997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 
00:29:13.492 [2024-07-25 10:44:16.911288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.911327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.911615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.911655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.912022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.912063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.492 [2024-07-25 10:44:16.912298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.492 [2024-07-25 10:44:16.912337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.492 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.912570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.912610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.912965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.913008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.913314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.913354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.913634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.913674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.914053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.914095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.914466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.914507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 
00:29:13.493 [2024-07-25 10:44:16.914739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.914757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.914988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.915029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.915256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.915297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.915626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.915668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.915858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.915876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.916156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.916173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.916479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.916496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.916857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.916899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.917134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.917174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.917414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.917455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 
00:29:13.493 [2024-07-25 10:44:16.917835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.917877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.918282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.918322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.918557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.918574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.918801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.918819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.919076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.919116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.919432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.919472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.919767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.919787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.920136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.920154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.920415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.920455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.920680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.920729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 
00:29:13.493 [2024-07-25 10:44:16.921022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.921040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.921293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.921311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.921647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.921687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.921987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.922029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.922389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.922429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.922759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.922800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.493 [2024-07-25 10:44:16.923017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.493 [2024-07-25 10:44:16.923035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.493 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.923301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.923318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.923509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.923527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.923783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.923824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 
00:29:13.494 [2024-07-25 10:44:16.924083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.924123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.924382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.924423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.924730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.924771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.925003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.925043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.925338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.925378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.925620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.925660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.925986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.926028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.926316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.926357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.926509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.926549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.926933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.926951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 
00:29:13.494 [2024-07-25 10:44:16.927078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.927096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.927260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.927277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.927603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.927620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.927912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.927959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.928368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.928409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.928725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.928767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.929013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.929030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.929222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.929239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.929509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.929527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.929806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.929847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 
00:29:13.494 [2024-07-25 10:44:16.930078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.930118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.930407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.930448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.930749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.930792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.931188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.931206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.931462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.931502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.931804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.931822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.932035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.932052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.932362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.932379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.932630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.932670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 00:29:13.494 [2024-07-25 10:44:16.932969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.933010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.494 qpair failed and we were unable to recover it. 
00:29:13.494 [2024-07-25 10:44:16.933308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.494 [2024-07-25 10:44:16.933348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.933586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.933628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.933886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.933904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.934205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.934245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.934546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.934587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.934810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.934827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.935071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.935111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.935468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.935508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.935887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.935905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.936152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.936169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 
00:29:13.495 [2024-07-25 10:44:16.936418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.936438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.936614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.936632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.936870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.936888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.937135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.937152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.937462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.937480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.937734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.937776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.937959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.937999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.938283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.938324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.938699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.938764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.939050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.939068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 
00:29:13.495 [2024-07-25 10:44:16.939318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.939358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.939673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.939713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.939909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.939950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.940250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.940291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.940652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.940693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.941013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.941030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.941279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.941297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.941624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.941641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.941988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.942006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.942216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.942234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 
00:29:13.495 [2024-07-25 10:44:16.942573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.942613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.942969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.943011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.943317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.943358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.943724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.943766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.495 qpair failed and we were unable to recover it. 00:29:13.495 [2024-07-25 10:44:16.944050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.495 [2024-07-25 10:44:16.944067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.944425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.944465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.944798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.944839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.945004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.945044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.945352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.945392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.945682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.945732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 
00:29:13.496 [2024-07-25 10:44:16.946115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.946154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.946546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.946587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.946948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.946990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.947264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.947282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.947623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.947663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.947912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.947954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.948169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.948185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.948434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.948452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.948780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.948798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.949138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.949179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 
00:29:13.496 [2024-07-25 10:44:16.949410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.949451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.949752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.949771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.950031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.950048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.950286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.950303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.950638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.950679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.950981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.951022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.951309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.951350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.951707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.951757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.952055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.952097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.952379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.952419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 
00:29:13.496 [2024-07-25 10:44:16.952795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.952836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.953140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.953181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.953605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.953645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.954028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.954047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.954294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.954312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.954628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.954670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.954991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.955034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.955275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.496 [2024-07-25 10:44:16.955316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.496 qpair failed and we were unable to recover it. 00:29:13.496 [2024-07-25 10:44:16.955614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.955655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.956043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.956085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 
00:29:13.497 [2024-07-25 10:44:16.956461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.956502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.956740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.956782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.957043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.957083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.957390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.957430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.957745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.957763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.958046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.958064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.958322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.958374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.958546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.958586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.958945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.958996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.959303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.959343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 
00:29:13.497 [2024-07-25 10:44:16.959634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.959675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.959997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.960039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.960330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.960371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.960759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.960777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.960976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.960993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.961235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.961253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.961531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.961571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.961957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.961998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.962295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.962336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.962632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.962672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 
00:29:13.497 [2024-07-25 10:44:16.962978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.963019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.963376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.963417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.963781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.963823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.964200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.964240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.964564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.964611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.964924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.964965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.965285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.497 [2024-07-25 10:44:16.965325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.497 qpair failed and we were unable to recover it. 00:29:13.497 [2024-07-25 10:44:16.965648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.965695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.965951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.965977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.966226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.966243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 
00:29:13.498 [2024-07-25 10:44:16.966450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.966468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.966691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.966709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.966961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.966979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.967299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.967316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.967583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.967624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.967871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.967918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.968208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.968248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.968501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.968542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.968825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.968868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.969172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.969212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 
00:29:13.498 [2024-07-25 10:44:16.969594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.969635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.970044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.970086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.970320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.970360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.970736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.970778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.971151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.971169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.971449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.971489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.971782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.971824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.972184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.972220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.972445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.972486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.972878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.972919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 
00:29:13.498 [2024-07-25 10:44:16.973274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.973314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.973600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.973640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.973875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.973893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.974143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.974160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.974357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.974374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.974543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.974560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.974871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.974913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.975291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.975330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.975629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.975669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.498 [2024-07-25 10:44:16.976009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.976051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 
00:29:13.498 [2024-07-25 10:44:16.976426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.498 [2024-07-25 10:44:16.976466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.498 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.976735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.976776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.977091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.977130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.977373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.977391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.977561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.977579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.977705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.977757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.978119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.978160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.978482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.978523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.978852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.978870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.979038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.979056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 
00:29:13.499 [2024-07-25 10:44:16.979317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.979358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.979736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.979778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.980169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.980210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.980483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.980523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.980823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.980864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.981225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.981266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.981491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.981532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.981916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.981957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.982260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.982301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.982618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.982659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 
00:29:13.499 [2024-07-25 10:44:16.982967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.982985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.983162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.983180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.983439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.983479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.983781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.983822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.984144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.984185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.984564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.984605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.984899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.984941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.985270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.985311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.985549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.985590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.985914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.985955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 
00:29:13.499 [2024-07-25 10:44:16.986188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.986229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.986607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.986648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.987078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.987121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.987411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.499 [2024-07-25 10:44:16.987450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.499 qpair failed and we were unable to recover it. 00:29:13.499 [2024-07-25 10:44:16.987733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.987752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.988085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.988126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.988533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.988573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.988807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.988824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.989145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.989186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.989589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.989636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 
00:29:13.500 [2024-07-25 10:44:16.989941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.989959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.990213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.990230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.990474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.990515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.990829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.990875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.991112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.991129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.991381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.991399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.991727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.991746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.992064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.992105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.992408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.992448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.992742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.992783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 
00:29:13.500 [2024-07-25 10:44:16.993162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.993203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.993559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.993599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.993976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.994017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.994381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.994422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.994756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.994798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.995108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.995148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.995531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.995571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.995957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.995998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.996222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.996262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.996569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.996609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 
00:29:13.500 [2024-07-25 10:44:16.996889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.996907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.997241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.997281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.997611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.997652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.997929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.997947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.998196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.998214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.998403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.998421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.998662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.998702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.500 qpair failed and we were unable to recover it. 00:29:13.500 [2024-07-25 10:44:16.999032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.500 [2024-07-25 10:44:16.999073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:16.999317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:16.999334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:16.999654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:16.999671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 
00:29:13.501 [2024-07-25 10:44:16.999859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:16.999882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.000213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.000254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.000635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.000677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.000818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.000836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.001146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.001188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.001493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.001533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.001775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.001817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.002207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.002248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.002625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.002665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.002905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.002946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 
00:29:13.501 [2024-07-25 10:44:17.003087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.003105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.003436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.003477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.003782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.003824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.004094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.004112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.004366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.004384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.004615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.004633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.004925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.004966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.005321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.005361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.005603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.005644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.005956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.005997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 
00:29:13.501 [2024-07-25 10:44:17.006376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.006416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.006639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.006680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.007073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.007091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.007401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.007441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.007761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.007779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.008109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.008150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.008455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.008496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.008746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.008793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.009152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.009194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.009510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.009551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 
00:29:13.501 [2024-07-25 10:44:17.009857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.501 [2024-07-25 10:44:17.009898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.501 qpair failed and we were unable to recover it. 00:29:13.501 [2024-07-25 10:44:17.010128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.010168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.010531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.010572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.010906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.010948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.011325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.011366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.011734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.011776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.012077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.012118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.012412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.012452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.012771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.012813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.013208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.013249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 
00:29:13.502 [2024-07-25 10:44:17.013634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.013675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.014064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.014107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.014487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.014527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.014855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.014896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.015179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.015196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.015519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.015559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.015849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.015890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.016185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.016226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.016603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.016643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.016949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.016968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 
00:29:13.502 [2024-07-25 10:44:17.017136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.017154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.017416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.017457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.017744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.017786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.018116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.018156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.018458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.018499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.018675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.018732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.019047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.019089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.019439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.019456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.019718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.019736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.020021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.020061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 
00:29:13.502 [2024-07-25 10:44:17.020372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.020412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.020767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.020809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.021039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.021079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.021440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.502 [2024-07-25 10:44:17.021480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.502 qpair failed and we were unable to recover it. 00:29:13.502 [2024-07-25 10:44:17.021862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.021904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.022150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.022191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.022488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.022528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.022769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.022810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.023130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.023183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.023488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.023506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 
00:29:13.503 [2024-07-25 10:44:17.023832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.023850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.024182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.024222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.024463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.024503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.024755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.024797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.025202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.025243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.025532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.025573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.025898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.025939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.026318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.026359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.026752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.026794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.027167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.027185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 
00:29:13.503 [2024-07-25 10:44:17.027352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.027369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.027624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.027642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.027894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.027912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.028172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.028189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.028527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.028567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.028891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.028933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.029289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.029307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.029557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.029574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.029836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.029878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.030045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.030086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 
00:29:13.503 [2024-07-25 10:44:17.030446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.030487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.030794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.030836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.031219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.031259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.031664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.503 [2024-07-25 10:44:17.031705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.503 qpair failed and we were unable to recover it. 00:29:13.503 [2024-07-25 10:44:17.032092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.032132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.032503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.032550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.032776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.032819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.033106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.033146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.033437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.033478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.033835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.033878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 
00:29:13.504 [2024-07-25 10:44:17.034236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.034277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.034633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.034674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.035011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.035053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.035406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.035424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.035756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.035798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.035973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.035991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.036172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.036225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.036521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.036561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.036877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.036919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.037215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.037256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 
00:29:13.504 [2024-07-25 10:44:17.037477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.037517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.037896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.037937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.038242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.038282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.038594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.038635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.038999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.039017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.039271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.039310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.039682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.039732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.039983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.040024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.040335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.040376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.040684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.040744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 
00:29:13.504 [2024-07-25 10:44:17.041065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.041105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.041486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.041527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.041853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.041900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.042281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.042298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.042472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.042490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.042751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.042793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.043085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.043126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.504 [2024-07-25 10:44:17.043484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.504 [2024-07-25 10:44:17.043524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.504 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.043855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.043897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.044190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.044208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 
00:29:13.505 [2024-07-25 10:44:17.044518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.044559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.044877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.044919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.045219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.045237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.045540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.045557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.045758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.045776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.045987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.046004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.046251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.046268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.046556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.046596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.046853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.046895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 00:29:13.505 [2024-07-25 10:44:17.047134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.505 [2024-07-25 10:44:17.047152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.505 qpair failed and we were unable to recover it. 
00:29:13.511 [2024-07-25 10:44:17.115860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.115903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.116223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.116241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.116473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.116492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.116769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.116787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.117017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.117055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.117459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.117499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.117740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.117782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.118085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.118125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.118439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.118480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.118863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.118904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 
00:29:13.511 [2024-07-25 10:44:17.119282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.119323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.119627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.119668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.119989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.120030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.120267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.120309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.120528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.120569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.120857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.120898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.121281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.121323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.121655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.121696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.511 [2024-07-25 10:44:17.121999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.511 [2024-07-25 10:44:17.122040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.511 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.122327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.122367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 
00:29:13.512 [2024-07-25 10:44:17.122682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.122734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.123055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.123096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.123410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.123451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.123808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.123850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.124164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.124204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.124438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.124479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.124790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.124833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.125148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.125189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.125462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.125480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.125743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.125761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 
00:29:13.512 [2024-07-25 10:44:17.125977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.126017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.126316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.126357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.126615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.126633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.126872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.126889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.127217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.127258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.127625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.127666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.128034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.128076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.128361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.128402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.128725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.128766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.129088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.129129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 
00:29:13.512 [2024-07-25 10:44:17.129493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.129534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.129756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.129798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.130157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.130198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.130425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.130466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.130826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.130868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.131160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.131202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.131554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.131572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.131880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.131898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.132157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.132199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.132604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.132644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 
00:29:13.512 [2024-07-25 10:44:17.132967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.133009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.133335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.133377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.512 [2024-07-25 10:44:17.133595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.512 [2024-07-25 10:44:17.133635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.512 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.134031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.134074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.134438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.134479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.134837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.134879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.135193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.135234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.135540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.135581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.135832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.135873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.136187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.136228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 
00:29:13.513 [2024-07-25 10:44:17.136468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.136509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.136887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.136928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.137235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.137276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.137524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.137541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.137781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.137799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.138055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.138100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.138394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.138434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.138734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.138786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.139113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.139155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.139404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.139444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 
00:29:13.513 [2024-07-25 10:44:17.139759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.139801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.140109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.140151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.140477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.140517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.140805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.140848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.141141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.141182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.141422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.141462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.141844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.141886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.142175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.142216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.142562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.142603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.142927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.142968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 
00:29:13.513 [2024-07-25 10:44:17.143193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.143210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.513 qpair failed and we were unable to recover it. 00:29:13.513 [2024-07-25 10:44:17.143579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.513 [2024-07-25 10:44:17.143620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.143998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.144040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.144390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.144407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.144649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.144666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.144991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.145009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.145207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.145248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.145501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.145542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.145850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.145891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.146049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.146089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 
00:29:13.514 [2024-07-25 10:44:17.146383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.146424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.146741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.146759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.146996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.147013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.147330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.147370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.147673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.147729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.148038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.148079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.148439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.148480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.148770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.148810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.149183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.149223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.149613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.149630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 
00:29:13.514 [2024-07-25 10:44:17.149938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.149979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.150246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.150283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.150446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.150486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.150844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.150885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.151188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.151229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.151612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.151653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.151953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.151994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.152209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.152226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.152519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.152560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.152975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.153017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 
00:29:13.514 [2024-07-25 10:44:17.153346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.153388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.153770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.153811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.154054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.154094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.154475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.154517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.154878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.514 [2024-07-25 10:44:17.154920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.514 qpair failed and we were unable to recover it. 00:29:13.514 [2024-07-25 10:44:17.155288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.155337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.155653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.155693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.155995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.156036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.156289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.156330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.156630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.156670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 
00:29:13.515 [2024-07-25 10:44:17.157065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.157106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.157511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.157558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.157862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.157903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.158191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.158232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.158564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.158606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.158988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.159029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.159415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.159456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.159796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.159836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.160219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.160260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 00:29:13.515 [2024-07-25 10:44:17.160564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.515 [2024-07-25 10:44:17.160605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.515 qpair failed and we were unable to recover it. 
00:29:13.515 [2024-07-25 10:44:17.161012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.515 [2024-07-25 10:44:17.161054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:13.515 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x1bdd1a0 (addr=10.0.0.2, port=4420) repeats continuously from 10:44:17.161 through 10:44:17.229, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:29:13.813 [2024-07-25 10:44:17.229903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.813 [2024-07-25 10:44:17.229921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420
00:29:13.813 qpair failed and we were unable to recover it.
00:29:13.813 [2024-07-25 10:44:17.230171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.230209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.230597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.230638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.230951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.230969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.231228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.231277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.231530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.231570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.231863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.231904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.232149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.232190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.232573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.232614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.232982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.233024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.233337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.233377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 
00:29:13.814 [2024-07-25 10:44:17.233752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.233786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.234074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.234115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.234499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.234541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.234843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.234885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.235148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.235188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.235526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.235566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.235895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.235937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.236311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.236352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.236664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.236682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.237035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.237077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 
00:29:13.814 [2024-07-25 10:44:17.237367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.237407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.237770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.237813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.238119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.238160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.238419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.238437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.238709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.238731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.239011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.239052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.239329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.239370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.239767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.239808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.240204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.240245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 00:29:13.814 [2024-07-25 10:44:17.240611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.240628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.814 qpair failed and we were unable to recover it. 
00:29:13.814 [2024-07-25 10:44:17.240944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.814 [2024-07-25 10:44:17.240985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.241296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.241337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.241704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.241754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.242162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.242202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.242579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.242619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.242911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.242929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.243266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.243306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.243633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.243673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.244069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.244111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.244414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.244456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 
00:29:13.815 [2024-07-25 10:44:17.244859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.244900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.245288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.245328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.245707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.245766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.246100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.246141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.246432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.246474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.246783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.246826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.247119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.247160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.247518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.247558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.247782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.247800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.248082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.248123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 
00:29:13.815 [2024-07-25 10:44:17.248426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.248467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.248829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.248870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.249199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.249239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.249611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.249652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.250056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.250097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.250358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.250399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.250795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.250837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.251216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.251256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.251614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.251655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.252023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.252041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 
00:29:13.815 [2024-07-25 10:44:17.252221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.815 [2024-07-25 10:44:17.252239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.815 qpair failed and we were unable to recover it. 00:29:13.815 [2024-07-25 10:44:17.252575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.252615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.252917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.252959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.253315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.253357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.253535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.253576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.253950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.253969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.254158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.254178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.254434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.254480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.254860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.254902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.255218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.255258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 
00:29:13.816 [2024-07-25 10:44:17.255649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.255690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.256007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.256048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.256349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.256389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.256744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.256763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.256959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.256999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.257304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.257344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.257579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.257620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.257960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.258002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.258402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.258443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.258796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.258837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 
00:29:13.816 [2024-07-25 10:44:17.259165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.259206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.259563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.259603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.259906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.259948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.260241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.260282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.260641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.260682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.261019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.261060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.261416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.261457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.261863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.261905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.262158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.262199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.262435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.262453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 
00:29:13.816 [2024-07-25 10:44:17.262739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.262782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.816 qpair failed and we were unable to recover it. 00:29:13.816 [2024-07-25 10:44:17.263116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.816 [2024-07-25 10:44:17.263156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.263379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.263397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.263645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.263665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.263984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.264025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.264257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.264298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.264589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.264629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.264865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.264906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.265200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.265241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.265560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.265599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 
00:29:13.817 [2024-07-25 10:44:17.265833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.265851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.266182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.266223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.266602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.266643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.266948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.266966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.267295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.267336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.267690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.267742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.268031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.268049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.268170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.268187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.268512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.268529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.268787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.268805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 
00:29:13.817 [2024-07-25 10:44:17.269142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.269182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.269490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.269531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.269832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.269850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.270084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.270102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.270351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.270391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.270728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.270770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.271077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.271117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.271445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.271485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.271844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.271886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.272188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.272228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 
00:29:13.817 [2024-07-25 10:44:17.272537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.272577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.817 [2024-07-25 10:44:17.272956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.817 [2024-07-25 10:44:17.272999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.817 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.273326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.273367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.273694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.273744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.273983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.274025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.274328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.274369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.274776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.274818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.275118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.275158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.275461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.275502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.275753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.275794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 
00:29:13.818 [2024-07-25 10:44:17.276176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.276216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.276593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.276633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.277023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.277065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.277306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.277346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.277764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.277823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.278128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.278169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.278410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.278450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.278729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.278747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.279003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.279050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.279350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.279391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 
00:29:13.818 [2024-07-25 10:44:17.279771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.279812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.280126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.280168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.280486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.280528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.280818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.280859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.281171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.281212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.281451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.281491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.281843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.281862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.282128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.282168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.282397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.282438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.282756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.282774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 
00:29:13.818 [2024-07-25 10:44:17.283011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.283029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.283354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.283372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.283636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.283676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.284065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.818 [2024-07-25 10:44:17.284106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.818 qpair failed and we were unable to recover it. 00:29:13.818 [2024-07-25 10:44:17.284503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.284544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.284843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.284862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.285211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.285251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.285495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.285537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.285854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.285872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.286147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.286164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 
00:29:13.819 [2024-07-25 10:44:17.286402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.286420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.286660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.286680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.286786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.286804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.287078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.287096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.287273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.287290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.287599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.287640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.287894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.287935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.288293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.288334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.288634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.288682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.288938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.288956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 
00:29:13.819 [2024-07-25 10:44:17.289207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.289245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-07-25 10:44:17.289513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-07-25 10:44:17.289553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.289928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.289946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.290262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.290302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.290671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.290712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.291009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.291027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.291360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.291399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.291686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.291734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.291956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.291998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.292287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.292327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 
00:29:13.820 [2024-07-25 10:44:17.292578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.292628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.292735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.292754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.293063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.293103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.293259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.293300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.293634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.293674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.294091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.294171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.294436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.294481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.294837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.294881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.295109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.295159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.295541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.295582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 
00:29:13.820 [2024-07-25 10:44:17.295883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.295901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.296139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.296156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.296393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.296410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.296771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.296813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.297066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.297107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.297446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.297487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.297801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.297843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.298019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.298060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.298418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.298459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.298840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.298882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 
00:29:13.820 [2024-07-25 10:44:17.299202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.299243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.299536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.299577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.299977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.300020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-07-25 10:44:17.300321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-07-25 10:44:17.300362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.300769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.300811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.301073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.301114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.301418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.301458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.301764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.301782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.302042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.302059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.302250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.302267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 
00:29:13.821 [2024-07-25 10:44:17.302512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.302530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.302772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.302790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.303029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.303047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.303333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.303350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.303673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.303691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.303935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.303953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.304201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.304219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.304469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.304487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.304834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.304877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.305122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.305162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 
00:29:13.821 [2024-07-25 10:44:17.305464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.305505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.305809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.305851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.306210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.306251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.306545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.306586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.306972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.307014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.307322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.307363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.307757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.307775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.308033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.308073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.308310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.308363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.308580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.308597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 
00:29:13.821 [2024-07-25 10:44:17.308796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.308814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.309127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.309168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.309461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.309502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.309820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.309838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.310108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.310149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.310460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-07-25 10:44:17.310501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-07-25 10:44:17.310837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.310879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.311259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.311300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.311630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.311670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.312005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.312023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.822 [2024-07-25 10:44:17.312284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.312336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.312663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.312704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.313094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.313112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.313359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.313377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.313619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.313636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.313964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.313982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.314291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.314308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.314622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.314663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.315058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.315100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.315415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.315456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.822 [2024-07-25 10:44:17.315681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.315732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.316098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.316139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.316501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.316542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.316775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.316817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.317129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.317170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.317469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.317510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.317812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.317854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.318144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.318185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.318485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.318526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.318901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.318919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.822 [2024-07-25 10:44:17.319194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.319236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.319462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.319503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.319848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.319890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.320181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.320222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.320390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.320431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.320747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.320789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.321115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.321155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.321536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.321576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.321881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.321928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-07-25 10:44:17.322332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-07-25 10:44:17.322373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.823 [2024-07-25 10:44:17.322592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.322610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.322990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.323031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.323273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.323314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.323632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.323672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.324079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.324121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.324413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.324454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.324763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.324781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.325042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.325060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.325361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.325378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.325708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.325757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 
00:29:13.823 [2024-07-25 10:44:17.326070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.326110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.326361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.326402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.326668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.326686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.326939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.326957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.327190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.327208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.327462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.327502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.327736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.327778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.328112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.328152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.328509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.328550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.328875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.328916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 
00:29:13.823 [2024-07-25 10:44:17.329206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.329246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.329611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.329652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.329941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.329959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.330205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.330223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.330470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.330505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.330827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.330870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.331228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.331269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.331585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.331626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.331933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.331976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.332355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.332396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 
00:29:13.823 [2024-07-25 10:44:17.332754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.332796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.333171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.333212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.333445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.333486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.333796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.333838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.334144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.334162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.823 qpair failed and we were unable to recover it. 00:29:13.823 [2024-07-25 10:44:17.334402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.823 [2024-07-25 10:44:17.334420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.824 qpair failed and we were unable to recover it. 00:29:13.824 [2024-07-25 10:44:17.334752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.824 [2024-07-25 10:44:17.334793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.824 qpair failed and we were unable to recover it. 00:29:13.824 [2024-07-25 10:44:17.335103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.824 [2024-07-25 10:44:17.335144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.824 qpair failed and we were unable to recover it. 00:29:13.824 [2024-07-25 10:44:17.335443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.824 [2024-07-25 10:44:17.335490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.824 qpair failed and we were unable to recover it. 00:29:13.824 [2024-07-25 10:44:17.335862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.824 [2024-07-25 10:44:17.335881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.824 qpair failed and we were unable to recover it. 
00:29:13.829 [2024-07-25 10:44:17.402409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.402448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.402792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.402834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.403145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.403162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.403487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.403528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.403910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.403951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.404312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.404353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.404660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.404702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.405046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.405087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.405450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.405491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.405792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.405833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 
00:29:13.829 [2024-07-25 10:44:17.406088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.406129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.406440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.406481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.406790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.406832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.407056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.407097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-07-25 10:44:17.407408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-07-25 10:44:17.407449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.407765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.407818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.408079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.408129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.408439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.408480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.408724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.408767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.409074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.409114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 
00:29:13.830 [2024-07-25 10:44:17.409469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.409510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.409829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.409880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.410227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.410268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.410557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.410598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.410913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.410932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.411222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.411262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.411505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.411545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.411837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.411855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.412121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.412162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.412525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.412566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 
00:29:13.830 [2024-07-25 10:44:17.412865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.412883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.413086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.413104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.413447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.413465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.413741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.413758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.413957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.413974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.414241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.414261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.414595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.414635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.414967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.415010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.415390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.415431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.415762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.415805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 
00:29:13.830 [2024-07-25 10:44:17.416176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.416217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.416531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.416572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.416931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.416973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.417271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.417288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.417617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.417658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.418075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.418117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.418497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.418538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.418895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.418936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.419226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.419266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.419653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.419694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 
00:29:13.830 [2024-07-25 10:44:17.420033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.420075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.420370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.420411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-07-25 10:44:17.420789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-07-25 10:44:17.420831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.421197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.421214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.421529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.421570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.421872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.421913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.422264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.422282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.422670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.422711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.423044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.423061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.423259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.423276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-07-25 10:44:17.423590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.423642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.423915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.423934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.424193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.424212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.424516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.424533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.424703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.424728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.424913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.424931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.425166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.425183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.425437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.425454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.425765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.425806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.426112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.426153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-07-25 10:44:17.426459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.426491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.426756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.426774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.427028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.427045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.427379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.427421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.427729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.427770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.428042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.428083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.428413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.428454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.428747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.428789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.429153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.429194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.429555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.429595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-07-25 10:44:17.429977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.430020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.430295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.430312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.430548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.430565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.430876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.430918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.431171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.431212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.431459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.431499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.431787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.431829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.432190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.432231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.432563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.432604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.432988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.433006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-07-25 10:44:17.433276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.433294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.433625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.433666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-07-25 10:44:17.434069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-07-25 10:44:17.434111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.434510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.434550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.434929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.434971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.435350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.435390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.435723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.435765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.436096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.436138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.436476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.436517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.436758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.436805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 
00:29:13.832 [2024-07-25 10:44:17.437040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.437057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.437226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.437244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.437590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.437631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.437973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.438015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.438416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.438434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.438691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.438709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.438995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.439036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.439426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.439468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.439779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.439822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.440159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.440200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 
00:29:13.832 [2024-07-25 10:44:17.440583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.440623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.441002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.441044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.441402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.441444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.441815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.441856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.442171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.442211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.442518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.442564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.442931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.442973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.443343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.443383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.443625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.443665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.444056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.444098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 
00:29:13.832 [2024-07-25 10:44:17.444288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.444306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.444543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.444561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.444854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.444897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.445202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.445243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.445537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.445578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.445890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.445932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.446233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.446250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.446606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.446647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.446904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.446921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.447240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.447281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 
00:29:13.832 [2024-07-25 10:44:17.447667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.832 [2024-07-25 10:44:17.447708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.832 qpair failed and we were unable to recover it. 00:29:13.832 [2024-07-25 10:44:17.448030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.448071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.448459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.448477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.448801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.448819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.449151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.449192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.449581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.449621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.449908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.449926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.450174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.450192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.450502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.450543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 00:29:13.833 [2024-07-25 10:44:17.450852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.833 [2024-07-25 10:44:17.450893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:13.833 qpair failed and we were unable to recover it. 
00:29:13.833 [2024-07-25 10:44:17.451198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.833 [2024-07-25 10:44:17.451238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:13.833 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 10:44:17.451 through 10:44:17.526; only the final attempt is shown below ...]
00:29:14.112 [2024-07-25 10:44:17.526026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.112 [2024-07-25 10:44:17.526068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.112 qpair failed and we were unable to recover it.
00:29:14.112 [2024-07-25 10:44:17.526450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.526490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.526876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.526917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.527211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.527252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.527667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.527707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.528073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.528114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.528441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.528461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.528794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.528836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.529152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.529193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.529555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.529595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.530015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.530057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 
00:29:14.112 [2024-07-25 10:44:17.530405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.530422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.530685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.530736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.531061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.531102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.531391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.531431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.531672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.531713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.532134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.532175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.532499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.532517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.532851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.532893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.533242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.112 [2024-07-25 10:44:17.533260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.112 qpair failed and we were unable to recover it. 00:29:14.112 [2024-07-25 10:44:17.533586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.533604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 
00:29:14.113 [2024-07-25 10:44:17.533810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.533828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.534077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.534094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.534431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.534472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.534819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.534860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.535176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.535216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.535522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.535564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.535854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.535896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.536275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.536316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.536699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.536752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.537130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.537170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 
00:29:14.113 [2024-07-25 10:44:17.537471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.537488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.537766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.537808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.538154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.538195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.538449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.538490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.538871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.538914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.539159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.539199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.539570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.539610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.539942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.539984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.540270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.540288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.540600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.540641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 
00:29:14.113 [2024-07-25 10:44:17.541009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.541051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.541335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.541353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.541600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.541618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.541799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.541817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.542177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.542195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.542525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.542544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.542850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.542868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.543107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.543124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.543379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.543397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.543708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.543759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 
00:29:14.113 [2024-07-25 10:44:17.544055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.544096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.544401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.544442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.113 qpair failed and we were unable to recover it. 00:29:14.113 [2024-07-25 10:44:17.544690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.113 [2024-07-25 10:44:17.544744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.545063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.545103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.545408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.545449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.545796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.545837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.546282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.546323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.546684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.546733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.547137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.547177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.547394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.547412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 
00:29:14.114 [2024-07-25 10:44:17.547733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.547774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.548133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.548173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.548544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.548561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.548880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.548898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.549140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.549181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.549473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.549514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.549892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.549934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.550303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.550344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.550711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.550760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.551049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.551090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 
00:29:14.114 [2024-07-25 10:44:17.551491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.551532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.551702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.551753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.552161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.552202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.552494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.552535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.552894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.552936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.553169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.553210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.553526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.553566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.553829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.553870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.554172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.554189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.554453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.554492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 
00:29:14.114 [2024-07-25 10:44:17.554801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.554843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.555168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.555209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.555590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.555631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.114 [2024-07-25 10:44:17.555944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.114 [2024-07-25 10:44:17.555987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.114 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.556207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.556247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.556619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.556665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.556912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.556953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.557243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.557284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.557681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.557752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.558080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.558121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 
00:29:14.115 [2024-07-25 10:44:17.558418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.558436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.558790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.558832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.559228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.559269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.559627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.559667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.560023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.560064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.560373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.560414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.560706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.560755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.561043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.561084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.561370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.561411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.561822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.561864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 
00:29:14.115 [2024-07-25 10:44:17.562270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.562311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.562602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.562643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.563036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.563078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.563382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.563424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.563762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.563804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.563963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.564004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.564287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.564304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.564617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.564658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.565047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.565089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.565392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.565432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 
00:29:14.115 [2024-07-25 10:44:17.565791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.565833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.566072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.566114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.566442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.115 [2024-07-25 10:44:17.566459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.115 qpair failed and we were unable to recover it. 00:29:14.115 [2024-07-25 10:44:17.566790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.566832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.567123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.567164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.567530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.567571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.567953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.567996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.568357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.568397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.568699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.568750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.569107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.569148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 
00:29:14.116 [2024-07-25 10:44:17.569505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.569546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.569929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.569970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.570363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.570404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.570784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.570825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.571136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.571177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.571583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.571629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.572015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.572057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.572286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.572303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.572556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.572574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 00:29:14.116 [2024-07-25 10:44:17.572849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.116 [2024-07-25 10:44:17.572890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.116 qpair failed and we were unable to recover it. 
00:29:14.116 [2024-07-25 10:44:17.573190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.116 [2024-07-25 10:44:17.573243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.116 qpair failed and we were unable to recover it.
00:29:14.116 [... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats for every connect retry from 10:44:17.573453 through 10:44:17.630078 ...]
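errno = 111 is ECONNREFUSED: nothing is listening on 10.0.0.2 port 4420 while the test holds the NVMe-oF/TCP target down, so every connect() from the host side is rejected immediately and the initiator keeps retrying the qpair. The snippet below is illustrative only, not part of the test scripts in this log; it reproduces the same refusal from a plain shell using bash's /dev/tcp redirection, assuming it runs on the initiator host while the target is still down:

```bash
#!/usr/bin/env bash
# Probe the NVMe-oF/TCP listener that the log above keeps retrying.
# While the target app is down, the kernel answers the SYN with RST
# and connect() fails with ECONNREFUSED -- errno 111, which is what
# posix_sock_create reports in the trace.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is accepting connections again"
else
    echo "connect to 10.0.0.2:4420 failed (connection refused, errno 111)"
fi
```

A refused connection returns immediately; the `timeout 1` guard only matters if a firewall silently drops the SYN instead of resetting it.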
00:29:14.121 [2024-07-25 10:44:17.630439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.121 [2024-07-25 10:44:17.630481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.121 qpair failed and we were unable to recover it.
00:29:14.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4055312 Killed "${NVMF_APP[@]}" "$@"
00:29:14.121 [... the same connect() failed / sock connection error / qpair failed sequence keeps repeating, interleaved with the shell trace below, through 10:44:17.641588 ...]
00:29:14.121 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:14.121 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:14.121 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:14.122 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:14.122 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.122 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4056133
00:29:14.122 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:14.122 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4056133 00:29:14.122 [2024-07-25 10:44:17.641891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-07-25 10:44:17.641910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 4056133 ']' 00:29:14.122 [2024-07-25 10:44:17.642215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-07-25 10:44:17.642235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-25 10:44:17.642469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-07-25 10:44:17.642509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.122 [2024-07-25 10:44:17.642821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.642863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.123 [2024-07-25 10:44:17.643157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.123 [2024-07-25 10:44:17.643211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.643396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.643415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:14.123 qpair failed and we were unable to recover it. 
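Note (not part of the captured output): at this point the test relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and blocks in waitforlisten until the new process (PID 4056133 here) answers on /var/tmp/spdk.sock. The sketch below is only an illustration under the assumption that "ready" means the UNIX-domain RPC socket accepts a connection; the real waitforlisten helper in the SPDK test scripts is more involved.

import socket
import time

def wait_for_rpc_socket(path: str = "/var/tmp/spdk.sock",
                        timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll a UNIX-domain socket until it accepts a connection or time runs out.

    Illustrative sketch only -- not the SPDK waitforlisten implementation.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(path)
            return True           # something is listening on the socket
        except OSError:
            time.sleep(interval)  # not up yet (e.g. ENOENT / ECONNREFUSED)
        finally:
            sock.close()
    return False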
00:29:14.123 [2024-07-25 10:44:17.643672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.643692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.644008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.644060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.644840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.644868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.645047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.645065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.645407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.645449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.645811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.645853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.646214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.646255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.646612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.646653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.647016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.647033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it.
00:29:14.123 [2024-07-25 10:44:17.647289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.647333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.647489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.647531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.647773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.647814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.648049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.648089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.648381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.648422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.648780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.648822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.649182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.649223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.649449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.649490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.649847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.649895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.650275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.650316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 
00:29:14.123 [2024-07-25 10:44:17.650624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.650664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.651030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.651071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.651318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.651359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.651578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.651595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.651850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.651868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.652177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.652218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.652453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.652493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.652786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.652803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.653165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.653205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 00:29:14.123 [2024-07-25 10:44:17.653499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.123 [2024-07-25 10:44:17.653539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.123 qpair failed and we were unable to recover it. 
00:29:14.124 [2024-07-25 10:44:17.653834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.653853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.654114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.654131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.654341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.654358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.654661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.654708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.654963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.655004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.655336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.655377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.655610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.655651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.655944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.655985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.656273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.656313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.656700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.656768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 
00:29:14.124 [2024-07-25 10:44:17.657161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.657201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.657416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.657456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.657808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.657825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.658157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.658174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.658370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.658411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.658707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.658756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.658942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.658982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.659307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.659347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.659658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.659698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.660035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.660086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 
00:29:14.124 [2024-07-25 10:44:17.660290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.660330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.660572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.660612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.660893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.660911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.661265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.661306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.661666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.661706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.661932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.661973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.662378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.662420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.662711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.662762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.663145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.663191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.663495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.663536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 
00:29:14.124 [2024-07-25 10:44:17.663821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.663839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.664121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.664162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.664421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.124 [2024-07-25 10:44:17.664462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.124 qpair failed and we were unable to recover it. 00:29:14.124 [2024-07-25 10:44:17.664822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.664863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.665158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.665200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.665583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.665624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.665957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.665999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.666379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.666419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.666745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.666763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.667033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.667050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 
00:29:14.125 [2024-07-25 10:44:17.667244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.667261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.667573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.667613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.667965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.668007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.668259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.668300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.668687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.668736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.669130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.669170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.669499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.669540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.669861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.669903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.670205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.670245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.670612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.670652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 
00:29:14.125 [2024-07-25 10:44:17.670906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.670923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.671117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.671158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.671411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.671451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.671766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.671808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.672115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.672156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.672569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.672609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.672787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.672804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.673041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.673058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.673227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.673244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.673560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.673600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 
00:29:14.125 [2024-07-25 10:44:17.673921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.673963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.125 [2024-07-25 10:44:17.674263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.125 [2024-07-25 10:44:17.674304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.125 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.674661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.674701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.675071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.675111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.675493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.675534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.675842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.675883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.676173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.676214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.676450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.676491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.676797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.676844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.677159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.677199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 
00:29:14.126 [2024-07-25 10:44:17.677524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.677564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.677870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.677911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.678245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.678286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.678641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.678682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.679050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.679103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.679406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.679447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.679749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.679790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.680037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.680077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.680365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.680406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.680759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.680802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 
00:29:14.126 [2024-07-25 10:44:17.681060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.681077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.681317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.681334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.681598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.681615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.681921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.681938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.682184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.682200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.682460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.682477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.682816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.682834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.682943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.682960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.683198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.683215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.683468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.683485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 
00:29:14.126 [2024-07-25 10:44:17.683810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.683827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.684136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.684153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.684267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.684284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.684614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.684631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.684890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.684907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.126 qpair failed and we were unable to recover it. 00:29:14.126 [2024-07-25 10:44:17.685149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.126 [2024-07-25 10:44:17.685166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.685418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.685435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.685614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.685630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.685867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.685884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.686229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.686246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-07-25 10:44:17.686427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.686444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.686777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.686794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.686897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.686914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.687198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.687239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.687542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.687582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.687854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.687871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.688150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.688167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.688347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.688364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.688672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.688692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.688882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.688899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-07-25 10:44:17.689152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.689169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.689338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.689355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.689604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.689620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.689882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.689923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.690178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.690219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.690610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.690649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.690947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.690988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.691230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.691271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.691633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.691670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.691975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.691993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-07-25 10:44:17.692001] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:29:14.127 [2024-07-25 10:44:17.692054] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.127 [2024-07-25 10:44:17.692299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.692320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.692651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.692666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.692851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.692868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.693115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.693132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.693305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.693322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.693499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.693540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.693846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.693887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.127 [2024-07-25 10:44:17.694263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.127 [2024-07-25 10:44:17.694303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.127 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.694624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.694665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 
00:29:14.128 [2024-07-25 10:44:17.694941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.694959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.695205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.695222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.695536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.695576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.695838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.695879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.696253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.696293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.696595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.696614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.696868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.696894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.697174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.697205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.697462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.697485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.697753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.697772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 
00:29:14.128 [2024-07-25 10:44:17.697963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.697980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.698254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.698271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.698520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.698536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.698721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.698738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.698927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.698944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.699274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.699291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.699529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.699547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.699829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.699847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.700179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.700196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.700497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.700513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 
00:29:14.128 [2024-07-25 10:44:17.700749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.700766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.700971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.700988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.701246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.701269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.701491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.701521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.701872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.701897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.702233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.702250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.702500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.702517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.702699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.702724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.703048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.703065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.703317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.703334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 
00:29:14.128 [2024-07-25 10:44:17.703606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.703623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.128 qpair failed and we were unable to recover it. 00:29:14.128 [2024-07-25 10:44:17.703857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.128 [2024-07-25 10:44:17.703878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.704207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.704224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.704458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.704474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.704733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.704750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.705026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.705042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.705369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.705386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.705691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.705708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.706044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.706061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.706310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.706327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 
00:29:14.129 [2024-07-25 10:44:17.706581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.706598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.706926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.706943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.707201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.707218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.707541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.707557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.707889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.707906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.708103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.708120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.708440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.708457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.708588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.708604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.708783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.708800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.709123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.709140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 
00:29:14.129 [2024-07-25 10:44:17.709412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.709429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.709754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.709773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.710080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.710096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.710356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.710375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.710644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.710662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.710899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.710916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.711157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.711174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.711451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.711468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.711822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.711840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.712095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.712112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 
00:29:14.129 [2024-07-25 10:44:17.712274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.712291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.712537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.712554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.712725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.712743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.712997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.713014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.129 qpair failed and we were unable to recover it. 00:29:14.129 [2024-07-25 10:44:17.713248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.129 [2024-07-25 10:44:17.713266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.713503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.713521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.713755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.713772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.714095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.714114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.714417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.714434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.714738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.714755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 
00:29:14.130 [2024-07-25 10:44:17.715062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.715079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.715401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.715421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.715735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.715753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.716101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.716118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.716419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.716436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.716670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.716687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.717034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.717051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.717232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.717249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.717496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.717514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.717788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.717805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 
00:29:14.130 [2024-07-25 10:44:17.718070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.718087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.718391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.718407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.718687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.718704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.718981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.718999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.719180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.719197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.719449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.719466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.719788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.719805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.720059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.720076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.720242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.720259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.130 qpair failed and we were unable to recover it. 00:29:14.130 [2024-07-25 10:44:17.720529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-07-25 10:44:17.720546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 
00:29:14.131 [2024-07-25 10:44:17.720728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.720746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.720994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.721011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.721173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.721190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.721517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.721534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.721804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.721822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.722019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.722036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.722270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.722287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.722482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.722499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.722755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.722775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.723045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.723063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 
00:29:14.131 [2024-07-25 10:44:17.723230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.723247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.723497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.723514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.723763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.723780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.724063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.724080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.724393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.724410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.724662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.724679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.724890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.724907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.725213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.725230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.725553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.725570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.725773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.725790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 
00:29:14.131 [2024-07-25 10:44:17.726041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.726057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.726307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.726323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.726641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.726658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.726827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.726844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.727029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.727046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.727296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.727313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.727518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.727535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.727705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.727729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-07-25 10:44:17.727984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-07-25 10:44:17.728001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.728258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.728275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 
00:29:14.132 [2024-07-25 10:44:17.728370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.728386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.728563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.728580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.728880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.728897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.729153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.729169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.729499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.729515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.729806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.729823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.730059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.730076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.730400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.730416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.132 [2024-07-25 10:44:17.730690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.730708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.730975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.730992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 
00:29:14.132 [2024-07-25 10:44:17.731195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.731212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.731545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.731562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.731834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.731851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.732040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.732057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.732339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.732356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.732609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.732626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.732936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.732953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.733281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.733298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.733572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.733589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.733765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.733784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 
00:29:14.132 [2024-07-25 10:44:17.734024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.734041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.734331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.734348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-07-25 10:44:17.734462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-07-25 10:44:17.734479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.734732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.734749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.735079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.735096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.735331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.735348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.735593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.735610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.735921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.735938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.736171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.736188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.736489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.736506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 
00:29:14.133 [2024-07-25 10:44:17.736764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.736782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.736964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.736983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.737233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.737250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.737411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.737428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.737665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.737682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.737925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.737943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.738053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.738070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.738273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.738289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.738451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.738468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.738700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.738722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 
00:29:14.133 [2024-07-25 10:44:17.738973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.738990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.739235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.739252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.739579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.739596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.739930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.739948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.740183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.740200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.740433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.740450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-07-25 10:44:17.740791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-07-25 10:44:17.740808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.740982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.740999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.741306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.741323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.741642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.741659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-07-25 10:44:17.741974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.741991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.742245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.742262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.742585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.742602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.742784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.742801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.742995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.743012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.743291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.743308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.743583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.743600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.743839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.743857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.744133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.744150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.744397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.744414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-07-25 10:44:17.744672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.744689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.744930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.744947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.745252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.745269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.745528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.745544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.745850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.745867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.746115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.746132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.746381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-07-25 10:44:17.746398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-07-25 10:44:17.746633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.746650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.746979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.746997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.747177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.747194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 
00:29:14.135 [2024-07-25 10:44:17.747466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.747484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.747734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.747754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.747956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.747972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.748214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.748231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.748552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.748570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.748805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.748822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.749071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.749088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.749323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.749341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.749528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.749546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.749777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.749794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 
00:29:14.135 [2024-07-25 10:44:17.750073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.750090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.750364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.750380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.750708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.750730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.750917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.750934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.751172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.751189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.751392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.751409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.751657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.751673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.751852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.135 [2024-07-25 10:44:17.751869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-07-25 10:44:17.752201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.752218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.752457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.752474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 
00:29:14.136 [2024-07-25 10:44:17.752711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.752731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.752899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.752916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.753119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.753136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.753464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.753481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.753671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.753688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.753944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.753962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.754245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.754262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.754541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.754558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.754816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.754834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.755029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.755046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 
00:29:14.136 [2024-07-25 10:44:17.755359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.755376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.755496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.755513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.755751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.755768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.756004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.756021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.756221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.756238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.756494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.756512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.756799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.756816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.757093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.757109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.757313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.757330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.757592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.757609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 
00:29:14.136 [2024-07-25 10:44:17.757805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.757824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.758006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.758025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.758229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.758246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.758433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.758450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.758695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.758712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.759036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.759054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.759222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.759239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.759543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.759561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.136 [2024-07-25 10:44:17.759818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.136 [2024-07-25 10:44:17.759834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.136 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.760161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.760178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 
00:29:14.137 [2024-07-25 10:44:17.760410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.760427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.760776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.760794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.761061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.761077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.761346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.761363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.761629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.761646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.761881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.761899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.762202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.762219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.762404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.762420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.762669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.762686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.763005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.763022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 
00:29:14.137 [2024-07-25 10:44:17.763347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.763364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.763690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.763707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.763950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.763967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.764076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.764093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.764342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.764359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.764594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.764611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.764944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.764962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.765267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.765284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.765533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.765550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.765890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.765908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 
00:29:14.137 [2024-07-25 10:44:17.766236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.766253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.766558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.766575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.766825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.766842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.767141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.767171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.767475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.767505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.767784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.767806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.768062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.768079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.137 qpair failed and we were unable to recover it. 00:29:14.137 [2024-07-25 10:44:17.768329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.137 [2024-07-25 10:44:17.768346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.768587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.768604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.768936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.768954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 
00:29:14.138 [2024-07-25 10:44:17.769207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.769224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.769550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.769570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.769823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.769841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.770167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.770184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.770478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.770495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.770758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.770776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.771024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.771041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.771237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.771254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.771504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.771521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.771776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.771795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 
00:29:14.138 [2024-07-25 10:44:17.772098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.772114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.772370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.772388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.772646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.772663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.772990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.773007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.773332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.773349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.773607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.773625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.773825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.773844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.774147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.774164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.774419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.774436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.774695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.774712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 
00:29:14.138 [2024-07-25 10:44:17.774888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.774905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.775084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.775101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.775427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.775445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.775626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.775643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.775946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.775964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.776315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.776334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.776586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.776605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.776911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.776929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.777237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.777254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 00:29:14.138 [2024-07-25 10:44:17.777598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.138 [2024-07-25 10:44:17.777615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.138 qpair failed and we were unable to recover it. 
00:29:14.138 [2024-07-25 10:44:17.777862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.138 [2024-07-25 10:44:17.777879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.138 qpair failed and we were unable to recover it.
00:29:14.139 [2024-07-25 10:44:17.778202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.139 [2024-07-25 10:44:17.778221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.139 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with only the timestamps advancing, from 10:44:17.778 through 10:44:17.785 ...]
00:29:14.139 [2024-07-25 10:44:17.785744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same qpair failure sequence continues from 10:44:17.785 through 10:44:17.837; the console timestamp prefix advances from 00:29:14.139 to 00:29:14.421 ...]
00:29:14.421 [2024-07-25 10:44:17.838221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.421 [2024-07-25 10:44:17.838238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.421 qpair failed and we were unable to recover it.
00:29:14.421 [2024-07-25 10:44:17.838547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.838563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.838818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.838835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.839075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.839094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.839326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.839343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.839675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.839692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.840009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.840026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.840302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.840319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.840632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.840649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.840961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.840979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.841282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.841299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 
00:29:14.421 [2024-07-25 10:44:17.841552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.841569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.841872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.841889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.842214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.842231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.842578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.842594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.842931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.842948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.843206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.843223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.843511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.843527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.843847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.843864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.844188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.844205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.844562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.844578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 
00:29:14.421 [2024-07-25 10:44:17.844840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.844857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.845184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.845211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.845447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.845465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.845764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.845781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.845951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.845968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.846296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.846313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.846576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.846593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.421 qpair failed and we were unable to recover it. 00:29:14.421 [2024-07-25 10:44:17.846934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.421 [2024-07-25 10:44:17.846951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.847262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.847279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.847534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.847551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 
00:29:14.422 [2024-07-25 10:44:17.847823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.847840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.848107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.848124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.848468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.848486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.848789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.848806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.848986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.849003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.849249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.849266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.849579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.849596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.849903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.849920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.850245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.850261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.850586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.850603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 
00:29:14.422 [2024-07-25 10:44:17.850920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.850937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.851191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.851208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.851528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.851547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.851805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.851823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.852146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.852163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.852511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.852530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.852837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.852856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.853186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.853203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.853476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.853492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 00:29:14.422 [2024-07-25 10:44:17.853832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.422 [2024-07-25 10:44:17.853849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.422 qpair failed and we were unable to recover it. 
00:29:14.422 [2024-07-25 10:44:17.854198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.422 [2024-07-25 10:44:17.854214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.422 qpair failed and we were unable to recover it.
00:29:14.422 [2024-07-25 10:44:17.854472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.422 [2024-07-25 10:44:17.854489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.422 qpair failed and we were unable to recover it.
00:29:14.422 [2024-07-25 10:44:17.854760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.422 [2024-07-25 10:44:17.854777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.422 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.855107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.855125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.855472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.855491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.855820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.855838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.856116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.856134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.856460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.856477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.856625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:14.423 [2024-07-25 10:44:17.856661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:14.423 [2024-07-25 10:44:17.856671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:14.423 [2024-07-25 10:44:17.856680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:14.423 [2024-07-25 10:44:17.856688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
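Note: the app_setup_trace notices above give two ways to pull trace data while the target is running: invoke 'spdk_trace -s nvmf -i 0', or copy /dev/shm/nvmf_trace.0 aside for offline analysis. A small helper sketch along those lines (assumes the spdk_trace binary is on PATH; the shared-memory file name is taken from the notice, and the destination file name is arbitrary):

    #!/usr/bin/env python3
    """Capture the nvmf trace data described by the app_setup_trace notices."""
    import shutil
    import subprocess
    import sys

    SHM_TRACE = "/dev/shm/nvmf_trace.0"  # file name from the notice above

    def copy_for_offline_analysis(dest="nvmf_trace.0.copy"):
        # Option from the notice: copy the shared-memory file for offline debug.
        shutil.copy(SHM_TRACE, dest)
        print(f"copied {SHM_TRACE} -> {dest}")

    def live_snapshot():
        # Option from the notice: 'spdk_trace -s nvmf -i 0' captures a snapshot
        # of events at runtime (requires the spdk_trace tool to be installed).
        subprocess.run(["spdk_trace", "-s", "nvmf", "-i", "0"], check=True)

    if __name__ == "__main__":
        copy_for_offline_analysis(sys.argv[1] if len(sys.argv) > 1 else "nvmf_trace.0.copy")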
00:29:14.423 [2024-07-25 10:44:17.856726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.856743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.857017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.857033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.857106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:14.423 [2024-07-25 10:44:17.857198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:14.423 [2024-07-25 10:44:17.857362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.857379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.857306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:14.423 [2024-07-25 10:44:17.857304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:14.423 [2024-07-25 10:44:17.857651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.857667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.857902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.857918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.858171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.858188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.858492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.858509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.858826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.858844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
00:29:14.423 [2024-07-25 10:44:17.859154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.423 [2024-07-25 10:44:17.859172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420
00:29:14.423 qpair failed and we were unable to recover it.
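Note: the reactor_run notices above show the SPDK event framework starting one reactor per core on cores 4-7. That set of cores is usually handed to an SPDK app as a hex CPU mask (conventionally via the -m/--cpumask option; the exact invocation is not shown in this log). The mask for cores 4-7 works out as below:

    # Build the hex core mask for the reactors reported above (cores 4, 5, 6, 7).
    cores = [4, 5, 6, 7]
    mask = 0
    for c in cores:
        mask |= 1 << c   # one bit per CPU core
    print(hex(mask))     # -> 0xf0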
00:29:14.423 [2024-07-25 10:44:17.859408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.859425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.859751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.859768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.860082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.860099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.860370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.860387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.860722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.860739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.860974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.860991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.861342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.861358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.861610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.861627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.861950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.861967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.862220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.862237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 
00:29:14.423 [2024-07-25 10:44:17.862561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.862578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.862825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.862842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.423 [2024-07-25 10:44:17.863184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.423 [2024-07-25 10:44:17.863203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.423 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.863453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.863471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.863797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.863815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.864170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.864187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.864490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.864507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.864833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.864851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.865202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.865219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.865548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.865566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 
00:29:14.424 [2024-07-25 10:44:17.865914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.865932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.866263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.866281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.866584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.866601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.866919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.866937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.867210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.867228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.867467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.867484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.867819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.867838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.868128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.868145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.868527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.868545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.868853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.868870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 
00:29:14.424 [2024-07-25 10:44:17.869201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.869219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.869574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.869593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.869920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.869939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.870291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.870310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.870637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.870655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.871001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.871019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.871351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.871369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.871729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.871749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.872020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.872038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.872353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.872370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 
00:29:14.424 [2024-07-25 10:44:17.872725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.872743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.873074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.873093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.424 [2024-07-25 10:44:17.873445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.424 [2024-07-25 10:44:17.873463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.424 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.873791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.873808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.874161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.874180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.874485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.874503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.874831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.874853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.875187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.875205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.875533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.875552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.875904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.875922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 
00:29:14.425 [2024-07-25 10:44:17.876258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.876277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.876507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.876526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.876829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.876850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.877155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.877173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.877497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.877515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.877784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.877804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.878110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.878129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.878309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.878326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.878576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.878594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.878855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.878873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 
00:29:14.425 [2024-07-25 10:44:17.879112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.879131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.879436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.879453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.879781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.879798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.880148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.880167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.880348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.880365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.880695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.880718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.881024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.881042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.881336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.881354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.881604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.881622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 00:29:14.425 [2024-07-25 10:44:17.881929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.425 [2024-07-25 10:44:17.881947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.425 qpair failed and we were unable to recover it. 
00:29:14.432 [2024-07-25 10:44:17.937771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.937788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.938140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.938157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.938497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.938513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.938847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.938864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.939211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.939228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.939482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.939499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.939810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.939828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.940175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.940192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.940499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.940516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.940836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.940853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 
00:29:14.432 [2024-07-25 10:44:17.941050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.941067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.941325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.941342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.941636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.941652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.941976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.941994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.942161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.942178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.942505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.942522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.942825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.942844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.943096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.943113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.943439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.943456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.943656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.943674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 
00:29:14.432 [2024-07-25 10:44:17.943892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.943910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.432 [2024-07-25 10:44:17.944161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-07-25 10:44:17.944178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.432 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.944424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.944442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.944752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.944770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.944989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.945006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.945190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.945207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.945478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.945496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.945851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.945868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.946172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.946190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.946490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.946506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 
00:29:14.433 [2024-07-25 10:44:17.946766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.946783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.947106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.947125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.947486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.947505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.947830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.947847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.948172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.948190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.948442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.948459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.948696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.948717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.948977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.948995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.949263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.949280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.949631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.949648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 
00:29:14.433 [2024-07-25 10:44:17.949914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.949932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.950259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.950275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.950597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.950614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.950966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.950983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.951218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.951234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.951581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.951598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.951838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.951855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.952157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.433 [2024-07-25 10:44:17.952174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.433 qpair failed and we were unable to recover it. 00:29:14.433 [2024-07-25 10:44:17.952411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.952428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.952754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.952771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 
00:29:14.434 [2024-07-25 10:44:17.953119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.953137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.953389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.953406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.953738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.953756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.954058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.954075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.954330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.954347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.954619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.954635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.954965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.954982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.955331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.955348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.955674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.955692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.956008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.956025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 
00:29:14.434 [2024-07-25 10:44:17.956277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.956294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.956631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.956649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.956952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.956970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.957150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.957168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.957474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.957491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.957790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.957808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.958136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.958152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.958354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.958371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.958701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.958720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.959046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.959063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 
00:29:14.434 [2024-07-25 10:44:17.959245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.959263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.959609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.959626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.959905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.959925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.960266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.960283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.960624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.960641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.960934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.960951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.961278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.961295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.961618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.961635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.961986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.434 [2024-07-25 10:44:17.962003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.434 qpair failed and we were unable to recover it. 00:29:14.434 [2024-07-25 10:44:17.962331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.962347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 
00:29:14.435 [2024-07-25 10:44:17.962619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.962636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.962885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.962902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.963177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.963194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.963502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.963519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.963787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.963804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.964156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.964173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.964430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.964447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.964775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.964792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.965107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.965123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.965398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.965415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 
00:29:14.435 [2024-07-25 10:44:17.965730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.965747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.965935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.965952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.966187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.966205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.966452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.966469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.966796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.966813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.967153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.967170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.967466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.967483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.967831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.967849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.968103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.968120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.968430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.968472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 
00:29:14.435 [2024-07-25 10:44:17.968809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.968826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.969121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.969134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.969446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.969459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.969625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.969638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.969927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.969940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.970230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.970242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.970490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.970503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.970824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.970837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.971113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.435 [2024-07-25 10:44:17.971125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.435 qpair failed and we were unable to recover it. 00:29:14.435 [2024-07-25 10:44:17.971329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.971342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 
00:29:14.436 [2024-07-25 10:44:17.971656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.971668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.971915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.971928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.972177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.972193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.972419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.972432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.972756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.972769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.973052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.973064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.973377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.973389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.973698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.973711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.974036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.974049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.974241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.974253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 
00:29:14.436 [2024-07-25 10:44:17.974595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.974608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.974894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.974906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.975184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.975196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.975436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.975448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.975709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.975726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.975900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.975912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.976113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.976125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.976379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.976391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.976699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.976711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.976967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.976979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 
00:29:14.436 [2024-07-25 10:44:17.977272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.977284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.977624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.977636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.977922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.977935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.978227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.978239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.978546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.978558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.978819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.978832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.979032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.979044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.979350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.979362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.979677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.979689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 00:29:14.436 [2024-07-25 10:44:17.980073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.436 [2024-07-25 10:44:17.980086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.436 qpair failed and we were unable to recover it. 
00:29:14.436 [2024-07-25 10:44:17.980332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:14.437 [2024-07-25 10:44:17.980345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 
00:29:14.437 qpair failed and we were unable to recover it. 
00:29:14.437 - 00:29:14.444 [2024-07-25 10:44:17.980 - 10:44:18.041] The three-line failure above repeats, unchanged except for its timestamps, for every reconnect attempt in this interval: each connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error, and the qpair fails without recovering. Attempts up to roughly 10:44:18.002 are reported against tqpair=0x7fae08000b90; attempts from then on are reported against tqpair=0x7fae00000b90, with the same address and port.
00:29:14.444 [2024-07-25 10:44:18.042087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.042104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.042409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.042426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.042752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.042769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.043020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.043038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.043308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.043325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.043652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.043669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.044019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.044037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.044282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.044299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.044666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.044683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.044979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.044996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 
00:29:14.444 [2024-07-25 10:44:18.045258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.045276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.045593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.045609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.045869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.045886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.046155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.046172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.046374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.046390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.046712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.046733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.046938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.046955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.047274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.047291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.047617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.047634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.047980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.047998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 
00:29:14.444 [2024-07-25 10:44:18.048254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.048271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.048584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.048601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.048905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.048922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.049168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.049185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.049429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.049446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.049748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.049765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.444 qpair failed and we were unable to recover it. 00:29:14.444 [2024-07-25 10:44:18.050025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.444 [2024-07-25 10:44:18.050042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.050254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.050272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.050605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.050622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.050952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.050972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 
00:29:14.445 [2024-07-25 10:44:18.051297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.051314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.051578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.051595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.051925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.051943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.052298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.052316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.052653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.052670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.052975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.052992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.053317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.053333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.053603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.053620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.053875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.053893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.054147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.054164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 
00:29:14.445 [2024-07-25 10:44:18.054415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.054433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.054742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.054759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.054994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.055011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.055281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.055298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.055570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.055587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.055940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.055957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.056206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.056223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.056561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.056578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.056834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.056851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.057161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.057179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 
00:29:14.445 [2024-07-25 10:44:18.057454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.057471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.057720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.057738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.445 [2024-07-25 10:44:18.058041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.445 [2024-07-25 10:44:18.058058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.445 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.058259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.058276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.058522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.058539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.058867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.058885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.059166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.059182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.059498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.059516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.059800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.059818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.060152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.060169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 
00:29:14.446 [2024-07-25 10:44:18.060495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.060512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.060869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.060886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.061140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.061157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.061400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.061416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.061709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.061732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.062062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.062079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.062378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.062395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.062722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.062740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.062942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.062958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.063238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.063257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 
00:29:14.446 [2024-07-25 10:44:18.063605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.063623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.063871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.063889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.064151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.064168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.064370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.064387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.064641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.064657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.064964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.064981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.065284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.065301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.065498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.065515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.065825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.065843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.066165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.066182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 
00:29:14.446 [2024-07-25 10:44:18.066414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.066431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.066685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.066703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.446 [2024-07-25 10:44:18.067039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.446 [2024-07-25 10:44:18.067056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.446 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.067313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.067330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.067662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.067679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.067961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.067978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.068283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.068300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.068670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.068688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.068940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.068957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.069220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.069237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 
00:29:14.447 [2024-07-25 10:44:18.069577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.069594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.069932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.069949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.070274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.070291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.070546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.070563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.070891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.070909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.071105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.071122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.071378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.071396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.071683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.071700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.071993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.072009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.072314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.072330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 
00:29:14.447 [2024-07-25 10:44:18.072656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.072673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.072992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.073010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.073280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.073297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.073672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.073689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.073939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.073956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.074212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.074229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.074548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.074565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.074926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.074943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.075269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.075286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.075608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.075628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 
00:29:14.447 [2024-07-25 10:44:18.075923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.075940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.076117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.076134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.447 [2024-07-25 10:44:18.076323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.447 [2024-07-25 10:44:18.076340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.447 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.076593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.076609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.076854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.076871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.077072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.077088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.077334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.077351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.077598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.077615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.077928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.077946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.078205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.078222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 
00:29:14.448 [2024-07-25 10:44:18.078510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.078527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.078769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.078786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.079112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.079129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.079326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.079343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.079541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.079558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.079809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.079826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.080131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.080148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.080487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.080504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.080840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.080857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.081167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.081183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 
00:29:14.448 [2024-07-25 10:44:18.081490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.081507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.081760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.081777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.081994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.082011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.082204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.082222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.082461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.082479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.082807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.082825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.083068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.083084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.083433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.083450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.083751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.083769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.084032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.084049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 
00:29:14.448 [2024-07-25 10:44:18.084335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.084351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.084673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.448 [2024-07-25 10:44:18.084689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.448 qpair failed and we were unable to recover it. 00:29:14.448 [2024-07-25 10:44:18.085008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.085027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.085330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.085347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.085526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.085543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.085881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.085898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.086135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.086152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.086474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.086491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.086803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.086820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.087073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.087093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 
00:29:14.449 [2024-07-25 10:44:18.087331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.087349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.087650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.087667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.087935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.087953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.088156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.088173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.088523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.088540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.088812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.088829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.089136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.089153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.089408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.089425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.089693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.089710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.089906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.089923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 
00:29:14.449 [2024-07-25 10:44:18.090133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.090150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.090471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.090488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.090857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.090874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.091088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.091104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.091349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.091367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.091695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.091712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.091970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.091987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.092323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.092340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.449 [2024-07-25 10:44:18.092688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.449 [2024-07-25 10:44:18.092704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.449 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.092989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.093006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 
00:29:14.450 [2024-07-25 10:44:18.093255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.093272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.093631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.093648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.093907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.093924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.094230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.094247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.094523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.094540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.094885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.094902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.095108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.095125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.095379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.095396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.095672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.095689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.096008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.096025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 
00:29:14.450 [2024-07-25 10:44:18.096212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.096230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.096516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.096533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.096789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.096806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.097106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.097122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.097313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.097329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.097668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.097685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.097959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.097977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.098261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.098279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.098561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.098578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.098835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.098854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 
00:29:14.450 [2024-07-25 10:44:18.099138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.099155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.099385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.099402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.450 qpair failed and we were unable to recover it. 00:29:14.450 [2024-07-25 10:44:18.099696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.450 [2024-07-25 10:44:18.099718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.099976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.099993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.100197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.100214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.100563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.100581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.100895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.100913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.101216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.101233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.101488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.101505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.101765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.101782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 
00:29:14.451 [2024-07-25 10:44:18.102033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.102050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.102397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.102413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.102615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.102632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.102951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.102968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.103227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.103245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.103611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.103627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.103891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.103908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.451 [2024-07-25 10:44:18.104183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.451 [2024-07-25 10:44:18.104200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.451 qpair failed and we were unable to recover it. 00:29:14.729 [2024-07-25 10:44:18.104474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.729 [2024-07-25 10:44:18.104491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.729 qpair failed and we were unable to recover it. 00:29:14.729 [2024-07-25 10:44:18.104771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.729 [2024-07-25 10:44:18.104789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.729 qpair failed and we were unable to recover it. 
00:29:14.729 [2024-07-25 10:44:18.105072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.729 [2024-07-25 10:44:18.105089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.729 qpair failed and we were unable to recover it. 00:29:14.729 [2024-07-25 10:44:18.105359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.729 [2024-07-25 10:44:18.105376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.729 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.105629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.105646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.105950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.105967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.106276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.106293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.106643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.106660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae00000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.106778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1beb210 is same with the state(5) to be set 00:29:14.730 [2024-07-25 10:44:18.107077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.107107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.107409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.107423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.107726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.107740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 
00:29:14.730 [2024-07-25 10:44:18.107989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.108002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.108319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.108332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.108647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.108659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.108974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.108986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.109226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.109239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.109517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.109529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.109746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.109759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.110079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.110092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.110391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.110404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.110648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.110661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 
00:29:14.730 [2024-07-25 10:44:18.110986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.110999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.111243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.111255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.111522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.111534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.111757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.111770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.112063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.112075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.112366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.112379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.112608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.112620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.112934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.112947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.113208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.113220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.113501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.113513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 
00:29:14.730 [2024-07-25 10:44:18.113828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.113841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.114159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.114171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.114415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.114427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.114652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.114668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.114987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.114999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.730 [2024-07-25 10:44:18.115191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.730 [2024-07-25 10:44:18.115203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.730 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.115467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.115480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.115775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.115787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.116020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.116032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.116280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.116292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 
00:29:14.731 [2024-07-25 10:44:18.116473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.116485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.116781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.116794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.116975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.116988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.117236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.117248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.117577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.117590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.117924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.117937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.118162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.118174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.118518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.118530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.118770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.118782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.119077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.119089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 
00:29:14.731 [2024-07-25 10:44:18.119329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.119342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.119595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.119607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.119869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.119882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.120183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.120195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.120426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.120438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.120664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.120677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.120928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.120940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.121116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.121129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.121422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.121435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.121660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.121672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 
00:29:14.731 [2024-07-25 10:44:18.121901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.121914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.122256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.122268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.122603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.122615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.122904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.122917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.123117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.123129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.123372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.123384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.123568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.123580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.123860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.123873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.124119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.731 [2024-07-25 10:44:18.124132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.731 qpair failed and we were unable to recover it. 00:29:14.731 [2024-07-25 10:44:18.124470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.124482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 
00:29:14.732 [2024-07-25 10:44:18.124655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.124667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.124992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.125004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.125319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.125331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.125649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.125663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.125926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.125939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.126274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.126287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.126546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.126558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.126740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.126753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.126996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.127008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.127280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.127292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 
00:29:14.732 [2024-07-25 10:44:18.127623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.127635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.127893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.127906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.128222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.128234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.128438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.128451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.128743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.128756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.129045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.129058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.129396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.129408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.129655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.129667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.129931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.129943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.130198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.130210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 
00:29:14.732 [2024-07-25 10:44:18.130524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.130536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.130805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.130819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.131135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.131147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.131446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.131458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.131737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.131749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.131952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.131964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.132228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.132241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.132526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.132538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.132852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.132865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.133100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.133113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 
00:29:14.732 [2024-07-25 10:44:18.133372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.133385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.133700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.133713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.732 [2024-07-25 10:44:18.133973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.732 [2024-07-25 10:44:18.133986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.732 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.134252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.134264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.134676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.134689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.134957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.134969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.135225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.135237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.135517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.135529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.135860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.135873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.136189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.136201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 
00:29:14.733 [2024-07-25 10:44:18.136468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.136480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.136779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.136791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.137017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.137029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.137324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.137338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.137679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.137691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.138031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.138045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.138289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.138301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.138532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.138544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.138808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.138821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.139113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.139125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 
00:29:14.733 [2024-07-25 10:44:18.139436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.139448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.139775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.139787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.140104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.140116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.140364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.140376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.140686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.140698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.140899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.140911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.141232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.141244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.141515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.141527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.141831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.141844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.142162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.142174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 
00:29:14.733 [2024-07-25 10:44:18.142498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.142510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.142749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.142762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.143048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.143061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.143355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.143367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.143631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.143643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.143963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.733 [2024-07-25 10:44:18.143975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.733 qpair failed and we were unable to recover it. 00:29:14.733 [2024-07-25 10:44:18.144176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.144188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.144428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.144440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.144614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.144626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.144853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.144866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 
00:29:14.734 [2024-07-25 10:44:18.145207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.145219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.145471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.145484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.145807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.145820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.146114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.146127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.146491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.146503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.146816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.146828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.147071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.147084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.147345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.147357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.147587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.147600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.147947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.147959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 
00:29:14.734 [2024-07-25 10:44:18.148140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.148153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.148468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.148480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.148748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.148761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.149020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.149034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.149348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.149360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.149666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.149680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.149958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.149970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.150191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.150203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.150527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.150539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.150883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.150896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 
00:29:14.734 [2024-07-25 10:44:18.151066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.151078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.151269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.151281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.151534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.151546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.151816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.151829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.152057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.152069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.152315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.152327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.152653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.152665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.734 qpair failed and we were unable to recover it. 00:29:14.734 [2024-07-25 10:44:18.152958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.734 [2024-07-25 10:44:18.152971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.153194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.153206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.153540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.153553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 
00:29:14.735 [2024-07-25 10:44:18.153861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.153874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.154064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.154076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.154369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.154382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.154704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.154719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.154907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.154920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.155122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.155134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.155426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.155438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.155745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.155757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.155949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.155962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.156187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.156199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 
00:29:14.735 [2024-07-25 10:44:18.156445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.156458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.156696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.156708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.156969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.156981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.157176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.157189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.157542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.157554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.157875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.157888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.158087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.158099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.158344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.158356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.158598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.158610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.158851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.158863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 
00:29:14.735 [2024-07-25 10:44:18.159133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.159146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.159444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.159456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.159764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.159776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.160022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.160037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.160238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.160250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.160493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.160506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.735 qpair failed and we were unable to recover it. 00:29:14.735 [2024-07-25 10:44:18.160738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.735 [2024-07-25 10:44:18.160751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.160923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.160935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.161159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.161172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.161350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.161363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 
00:29:14.736 [2024-07-25 10:44:18.161667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.161680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.161852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.161865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.162193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.162205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.162405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.162417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.162595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.162607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.162836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.162848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.163145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.163157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.163451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.163465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.163701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.163717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.163994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.164006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 
00:29:14.736 [2024-07-25 10:44:18.164233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.164245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.164531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.164543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.164811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.164824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.165146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.165158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.165430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.165442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.165770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.165784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.166034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.166046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.166289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.166301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.166574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.166586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.166760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.166772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 
00:29:14.736 [2024-07-25 10:44:18.167088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.167100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.167328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.167341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.167581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.167593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.167887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.167900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.168156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.168168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.168418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.168431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.168734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.168747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.169063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.169075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.169302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.169315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 00:29:14.736 [2024-07-25 10:44:18.169667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.736 [2024-07-25 10:44:18.169679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.736 qpair failed and we were unable to recover it. 
00:29:14.736 [2024-07-25 10:44:18.169928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.169941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.170252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.170264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.170536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.170548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.170823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.170838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.171085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.171097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.171321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.171333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.171672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.171684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.171932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.171945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.172268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.172281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.172599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.172611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 
00:29:14.737 [2024-07-25 10:44:18.172920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.172933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.173125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.173138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.173462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.173474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.173767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.173780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.174072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.174085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.174401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.174413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.174719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.174732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.175051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.175064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.175308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.175320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.175578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.175590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 
00:29:14.737 [2024-07-25 10:44:18.175945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.175958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.176235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.176248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.176508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.176520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.176835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.176847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.177096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.177108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.177337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.177349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.177595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.177607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.177783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.177796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.178058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.178071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.178342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.178355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 
00:29:14.737 [2024-07-25 10:44:18.178603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.178615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.178924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.178937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.179193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.179206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.737 [2024-07-25 10:44:18.179463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.737 [2024-07-25 10:44:18.179475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.737 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.179717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.179730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.179922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.179934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.180186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.180198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.180437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.180449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.180699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.180712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.180983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.180995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 
00:29:14.738 [2024-07-25 10:44:18.181185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.181197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.181445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.181457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.181641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.181654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.181971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.181986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.182292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.182304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.182535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.182547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.182869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.182882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.183180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.183192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.183389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.183401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.183640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.183653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 
00:29:14.738 [2024-07-25 10:44:18.183901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.183913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.184252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.184265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.184524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.184536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.184783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.184795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.185024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.185037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.185273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.185285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.185578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.185590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.185885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.185898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.186092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.186105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.186290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.186302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 
00:29:14.738 [2024-07-25 10:44:18.186643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.186655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.186938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.186952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.187211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.187224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.187401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.187413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.187727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.187740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.188052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.188064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.738 [2024-07-25 10:44:18.188374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.738 [2024-07-25 10:44:18.188387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.738 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.188626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.188638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.188905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.188917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.189114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.189127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 
00:29:14.739 [2024-07-25 10:44:18.189499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.189512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.189866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.189878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.190070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.190082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.190385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.190397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.190653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.190665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.190958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.190971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.191196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.191208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.191403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.191416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.191678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.191691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.191872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.191885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 
00:29:14.739 [2024-07-25 10:44:18.192129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.192142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.192399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.192411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.192703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.192718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.193041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.193056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.193324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.193336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.193649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.193662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.193949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.193961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.194204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.194216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.194526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.194539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.194783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.194796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 
00:29:14.739 [2024-07-25 10:44:18.195118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.195130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.195377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.195389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.195649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.195661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.195972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.195985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.196183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.196195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.196420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.196432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.196725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.196738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.196992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.197005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.197243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.197256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 00:29:14.739 [2024-07-25 10:44:18.197535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.739 [2024-07-25 10:44:18.197547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.739 qpair failed and we were unable to recover it. 
00:29:14.739 [2024-07-25 10:44:18.197860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.197872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.198190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.198203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.198412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.198424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.198702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.198716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.199021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.199035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.199288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.199301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.199478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.199490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.199667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.199679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.199975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.199988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.200214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.200226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 
00:29:14.740 [2024-07-25 10:44:18.200513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.200525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.200809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.200822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.201068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.201081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.201270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.201283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.201540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.201553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.201891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.201904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.202089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.202101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.202442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.202454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.202758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.202770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.202949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.202961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 
00:29:14.740 [2024-07-25 10:44:18.203185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.203197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.203370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.203383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.203739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.203752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.204060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.204074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.204365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.204377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.204703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.204718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.205028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.205040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.205333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.205345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.205536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.205548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 00:29:14.740 [2024-07-25 10:44:18.205775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.740 [2024-07-25 10:44:18.205787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.740 qpair failed and we were unable to recover it. 
00:29:14.741 [2024-07-25 10:44:18.206081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.206094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.206361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.206373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.206610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.206622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.206952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.206965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.207257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.207270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.207601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.207614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.207773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.207786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.208053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.208065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.208323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.208336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.208581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.208593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 
00:29:14.741 [2024-07-25 10:44:18.208886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.208898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.209080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.209093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.209336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.209348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.209597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.209609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.209792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.209804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.210071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.210083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.210259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.210271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.210603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.210616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.210799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.210812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.211046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.211058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 
00:29:14.741 [2024-07-25 10:44:18.211353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.211365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.211638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.211651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.211915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.211927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.212249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.212261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.212539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.212551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.212854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.212868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.213067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.213079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.213320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.213332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.213567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.213580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.213866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.213889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 
00:29:14.741 [2024-07-25 10:44:18.214133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.214145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.214394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.214406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.214730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.741 [2024-07-25 10:44:18.214742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.741 qpair failed and we were unable to recover it. 00:29:14.741 [2024-07-25 10:44:18.215001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.215015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.215204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.215216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.215523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.215536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.215780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.215793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.216035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.216047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.216293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.216306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.216568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.216580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 
00:29:14.742 [2024-07-25 10:44:18.216818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.216831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.217085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.217097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.217271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.217284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.217615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.217627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.217915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.217928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.218121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.218133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.218426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.218438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.218718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.218730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.218936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.218948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.219263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.219275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 
00:29:14.742 [2024-07-25 10:44:18.219580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.219593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.219839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.219853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.220168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.220180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.220417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.220430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.220741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.220754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.221046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.221059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.221244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.221256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.221530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.221543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.221765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.221778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.222036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.222050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 
00:29:14.742 [2024-07-25 10:44:18.222300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.222313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.222632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.222644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.222950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.222963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.223283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.223295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.223623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.223635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.223959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.223972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.224210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.742 [2024-07-25 10:44:18.224222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.742 qpair failed and we were unable to recover it. 00:29:14.742 [2024-07-25 10:44:18.224520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.743 [2024-07-25 10:44:18.224532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.743 qpair failed and we were unable to recover it. 00:29:14.743 [2024-07-25 10:44:18.224839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.743 [2024-07-25 10:44:18.224852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.743 qpair failed and we were unable to recover it. 00:29:14.743 [2024-07-25 10:44:18.225160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.743 [2024-07-25 10:44:18.225172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.743 qpair failed and we were unable to recover it. 
00:29:14.743 [2024-07-25 10:44:18.225474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.743 [2024-07-25 10:44:18.225487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:14.743 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 2024-07-25 10:44:18.283634 ...]
00:29:14.749 [2024-07-25 10:44:18.283957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.283970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.284157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.284170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.284394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.284406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.284578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.284590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.284825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.284838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.285168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.285180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.285529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.285541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.285734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.285747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.285924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.285936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.286197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.286209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 
00:29:14.749 [2024-07-25 10:44:18.286454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.286466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.286793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.286805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.287100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.287113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.287379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.287392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.287550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.287562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.287857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.287870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.749 qpair failed and we were unable to recover it. 00:29:14.749 [2024-07-25 10:44:18.288128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.749 [2024-07-25 10:44:18.288140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.288457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.288470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.288761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.288774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.288954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.288967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 
00:29:14.750 [2024-07-25 10:44:18.289159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.289173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.289534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.289546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.289841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.289854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.290168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.290181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.290418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.290430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.290752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.290764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.291062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.291073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.291313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.291326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.291550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.291563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.291811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.291824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 
00:29:14.750 [2024-07-25 10:44:18.292081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.292094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.292403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.292415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.292655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.292667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.292919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.292931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.293126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.293138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.293437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.293449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.293686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.293698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.293934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.293948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.294264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.294276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.294529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.294541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 
00:29:14.750 [2024-07-25 10:44:18.294779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.294792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.295015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.295027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.295299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.295311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.295627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.295640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.295918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.295930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.750 [2024-07-25 10:44:18.296268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.750 [2024-07-25 10:44:18.296281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.750 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.296630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.296643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.296930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.296943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.297239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.297252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.297496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.297508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 
00:29:14.751 [2024-07-25 10:44:18.297754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.297767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.298008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.298020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.298338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.298351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.298676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.298689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.298938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.298951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.299291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.299303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.299656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.299669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.299913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.299926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.300221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.300233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.300550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.300562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 
00:29:14.751 [2024-07-25 10:44:18.300876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.300891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.301156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.301168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.301417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.301429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.301664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.301676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.301992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.302005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.302195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.302207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.302442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.302454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.302778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.302791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.302968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.302980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.303207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.303219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 
00:29:14.751 [2024-07-25 10:44:18.303471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.303484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.303659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.303671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.303963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.303976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.304288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.304301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.304460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.304473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.304796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.304808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.305096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.305108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.305265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.305278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.305521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.751 [2024-07-25 10:44:18.305533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.751 qpair failed and we were unable to recover it. 00:29:14.751 [2024-07-25 10:44:18.305697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.305710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 
00:29:14.752 [2024-07-25 10:44:18.305983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.305995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.306244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.306257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.306522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.306534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.306843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.306856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.307096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.307108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.307293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.307305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.307621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.307633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.307971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.307985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.308144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.308157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.308464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.308476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 
00:29:14.752 [2024-07-25 10:44:18.308788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.308800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.309057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.309069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.309252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.309264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.309511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.309524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.309838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.309851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.310078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.310090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.310313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.310325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.310640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.310653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.310877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.310890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.311158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.311170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 
00:29:14.752 [2024-07-25 10:44:18.311525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.311539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.311878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.311891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.312076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.312088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.312334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.312346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.312597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.312609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.312899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.312913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.313158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.313171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.313422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.313434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.313724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.313736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.314003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.314016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 
00:29:14.752 [2024-07-25 10:44:18.314258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.314270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.314535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.314547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.314851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.752 [2024-07-25 10:44:18.314863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.752 qpair failed and we were unable to recover it. 00:29:14.752 [2024-07-25 10:44:18.315126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.315139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.315380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.315392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.315688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.315700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.315933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.315945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.316240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.316252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.316517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.316529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.316846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.316859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 
00:29:14.753 [2024-07-25 10:44:18.317154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.317167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.317418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.317430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.317668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.317680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.318040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.318053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.318235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.318247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.318424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.318437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.318675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.318688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.318933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.318946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.319247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.319259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.319497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.319510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 
00:29:14.753 [2024-07-25 10:44:18.319827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.319840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.320032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.320045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.320343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.320356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.320650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.320663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.320944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.320956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.321147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.321159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.321466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.321479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.321793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.321806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.322055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.322068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.322231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.322243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 
00:29:14.753 [2024-07-25 10:44:18.322545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.322561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.322816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.322829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.323074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.323086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.323354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.323367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.323618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.323630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.323950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.323963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.753 [2024-07-25 10:44:18.324203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.753 [2024-07-25 10:44:18.324216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.753 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.324566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.324579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.324854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.324867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.325175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.325188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 
00:29:14.754 [2024-07-25 10:44:18.325550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.325562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.325808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.325820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.326115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.326127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.326446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.326459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.326718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.326732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.326983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.326995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.327193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.327206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.327405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.327418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.327746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.327759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.328001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.328014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 
00:29:14.754 [2024-07-25 10:44:18.328306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.328318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.328611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.328625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.328824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.328837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.329171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.329183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.329371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.329384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.329674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.329686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.329901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.329914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.330237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.330269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.330541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.330559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.330826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.330845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 
00:29:14.754 [2024-07-25 10:44:18.331058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.331076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.331429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.331446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.331751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.331769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.332032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.332050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.332257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.332274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.332546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.332563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.332732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.332749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.333051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.333068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.333323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.333339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 00:29:14.754 [2024-07-25 10:44:18.333666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.754 [2024-07-25 10:44:18.333683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.754 qpair failed and we were unable to recover it. 
00:29:14.754 [2024-07-25 10:44:18.334043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.334061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.334390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.334407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.334651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.334668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.335022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.335039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.335309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.335326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.335587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.335603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.335934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.335951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.336201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.336219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.336419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.336435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.336775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.336792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 
00:29:14.755 [2024-07-25 10:44:18.336986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.337003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.337250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.337267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.337632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.337649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.337974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.337991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.338264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.338283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.338626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.338643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.338841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.338858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.339115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.339132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.339386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.339405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.339728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.339746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 
00:29:14.755 [2024-07-25 10:44:18.340104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.340121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.340323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.340339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.340654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.340671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.340920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.340937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.341132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.341148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.341351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.341368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.341738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.341753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.342000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.342013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.342263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.342275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-07-25 10:44:18.342442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-07-25 10:44:18.342455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 
00:29:14.755 [2024-07-25 10:44:18.342735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.342747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.343082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.343094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.343352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.343364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.343571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.343584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.343783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.343796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.343976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.343989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.344283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.344296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.344502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.344514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.344832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.344845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.345150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.345162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 
00:29:14.756 [2024-07-25 10:44:18.345372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.345385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.345696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.345708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.345911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.345924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.346123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.346136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.346377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.346389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.346755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.346769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.347024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.347036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.347331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.347344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.347528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.347540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.347735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.347747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 
00:29:14.756 [2024-07-25 10:44:18.348023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.348036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.348260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.348272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.348513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.348526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.348769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.348781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.348976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.348990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.349101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.349113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.349298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.349311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.349568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.349581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.349767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.349779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.349942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.349954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 
00:29:14.756 [2024-07-25 10:44:18.350112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.350124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.350284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.350296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.350551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.350564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-07-25 10:44:18.350748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-07-25 10:44:18.350760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.350935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.350948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.351126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.351139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.351309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.351322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.351508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.351522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.351765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.351778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.351954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.351966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 
00:29:14.757 [2024-07-25 10:44:18.352237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.352249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.352419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.352431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.352610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.352622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.352895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.352908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.353074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.353087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.353340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.353353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.353621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.353633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.353819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.353832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.354004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.354017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.354240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.354252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 
00:29:14.757 [2024-07-25 10:44:18.354417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.354430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.354651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.354664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.354827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.354839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.355015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.355027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.355255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.355268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.355515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.355527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.355765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.355778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.356016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.356030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.356270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.356283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.356528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.356540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 
00:29:14.757 [2024-07-25 10:44:18.356836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.356848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.357156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.357169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.357326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.357347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.357580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.357592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.357852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.357866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.358171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.358184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.358413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-07-25 10:44:18.358425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-07-25 10:44:18.358604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.358616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.358786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.358799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.359028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.359040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 
00:29:14.758 [2024-07-25 10:44:18.359280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.359292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.359449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.359461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.359649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.359662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.359911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.359924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.360138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.360151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.360390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.360403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.360638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.360652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.360826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.360839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.361151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.361163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.361387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.361400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 
00:29:14.758 [2024-07-25 10:44:18.361571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.361583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.361764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.361776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.362093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.362105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.362334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.362347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.362522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.362534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.362659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.362672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.362915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.362928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.363176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.363188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.363352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.363365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.363557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.363569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 
00:29:14.758 [2024-07-25 10:44:18.363670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.363682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.363954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.363991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.364298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.364317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.364555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.364572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.364769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.364787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.365023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.365040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.365276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.365292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.365405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.365421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.365586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.365603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 00:29:14.758 [2024-07-25 10:44:18.365874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.758 [2024-07-25 10:44:18.365891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae10000b90 with addr=10.0.0.2, port=4420 00:29:14.758 qpair failed and we were unable to recover it. 
00:29:14.758 [2024-07-25 10:44:18.366063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.366077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.366325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.366338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.366649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.366662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.366905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.366918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.367083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.367097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.367273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.367286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.367445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.367458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.367797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.367810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.367985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.367997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.368239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.368251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 
00:29:14.759 [2024-07-25 10:44:18.368480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.368492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.368664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.368676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.368946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.368959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.369214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.369226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.369539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.369551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.369744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.369758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.369918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.369931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.370107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.370119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.370304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.370317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.370609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.370621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 
00:29:14.759 [2024-07-25 10:44:18.370951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.370964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.371136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.371148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.371376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.371388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.371655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.371667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.371763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.371776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.372024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.372037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.372225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.372238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.372496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.372509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.372834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.372846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.373091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.373104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 
00:29:14.759 [2024-07-25 10:44:18.373274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.373286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.373479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.373491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.373811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.373823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.759 qpair failed and we were unable to recover it. 00:29:14.759 [2024-07-25 10:44:18.374010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.759 [2024-07-25 10:44:18.374023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.374343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.374355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.374535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.374547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.374917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.374930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.375100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.375112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.375287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.375299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.375527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.375539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 
00:29:14.760 [2024-07-25 10:44:18.375643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.375655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.375921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.375933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.376107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.376119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.376356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.376368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.376690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.376704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.376934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.376947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.377261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.377274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.377446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.377459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.377699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.377712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.377874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.377887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 
00:29:14.760 [2024-07-25 10:44:18.378109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.378121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.378334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.378347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.378654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.378667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.378929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.378942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.379206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.379219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.379400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.379412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.379568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.379580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.379810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.379823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.380074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.380086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.380260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.380272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 
00:29:14.760 [2024-07-25 10:44:18.380510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.380522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.380785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.380797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.381035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.760 [2024-07-25 10:44:18.381047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.760 qpair failed and we were unable to recover it. 00:29:14.760 [2024-07-25 10:44:18.381370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.381383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.381611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.381623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.381863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.381876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.382032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.382045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.382217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.382229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.382475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.382488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.382711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.382727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 
00:29:14.761 [2024-07-25 10:44:18.382986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.382998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.383226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.383238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.383469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.383482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.383725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.383738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.383911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.383924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.384106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.384119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.384310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.384322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.384554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.384567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.384794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.384806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.384978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.384991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 
00:29:14.761 [2024-07-25 10:44:18.385161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.385173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.385296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.385309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.385466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.385479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.385749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.385762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.386010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.386024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.386293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.386306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.386561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.386573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.386816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.386829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.387014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.387026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.387199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.387211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 
00:29:14.761 [2024-07-25 10:44:18.387389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.387402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.387647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.387660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.387814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.387826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.388092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.388105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.388330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.388345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.388529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.388541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.388698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.761 [2024-07-25 10:44:18.388711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.761 qpair failed and we were unable to recover it. 00:29:14.761 [2024-07-25 10:44:18.388986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.388999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.389308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.389321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.389488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.389501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 
00:29:14.762 [2024-07-25 10:44:18.389676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.389689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.389918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.389930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.390101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.390113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.390357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.390369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.390602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.390616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.390847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.390860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.391022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.391035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.391127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.391139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.391362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.391374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.391687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.391700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 
00:29:14.762 [2024-07-25 10:44:18.391888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.391901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.392147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.392161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.392383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.392396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.392646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.392659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.392845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.392858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.393032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.393045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.393213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.393226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.393475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.393487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.393666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.393678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.393939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.393951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 
00:29:14.762 [2024-07-25 10:44:18.394179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.394191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.394435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.394447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.394675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.394687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.395000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.395013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.395191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.395203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.395390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.395403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.395500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.395512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.395766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.395779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.396006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.396019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 00:29:14.762 [2024-07-25 10:44:18.396311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.396324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.762 qpair failed and we were unable to recover it. 
00:29:14.762 [2024-07-25 10:44:18.396559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.762 [2024-07-25 10:44:18.396571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.396807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.396820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.397059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.397071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.397311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.397324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.397569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.397582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.397853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.397866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.398078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.398090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.398331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.398343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.398508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.398520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.398689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.398701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 
00:29:14.763 [2024-07-25 10:44:18.398890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.398902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.399179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.399191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.399512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.399524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.399756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.399769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.399998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.400011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.400240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.400252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.400480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.400492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.400680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.400692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.400874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.400886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.401133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.401146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 
00:29:14.763 [2024-07-25 10:44:18.401382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.401394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.401588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.401603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.401899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.401912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.402151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.402164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.402431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.402443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.402694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.402706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.402869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.402889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.403069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.403082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.403240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.403252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 00:29:14.763 [2024-07-25 10:44:18.403423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.763 [2024-07-25 10:44:18.403434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:14.763 qpair failed and we were unable to recover it. 
00:29:14.763 [2024-07-25 10:44:18.403604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.763 [2024-07-25 10:44:18.403616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:14.763 qpair failed and we were unable to recover it.
00:29:14.763 - 00:29:15.045 (entries from [2024-07-25 10:44:18.403859] through [2024-07-25 10:44:18.452184] omitted: the same three-line failure record repeats back-to-back, identical apart from timestamps - every connect() for tqpair=0x7fae08000b90 to addr=10.0.0.2, port=4420 fails with errno = 111 and each attempt ends with "qpair failed and we were unable to recover it.")
00:29:15.045 [2024-07-25 10:44:18.452446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.045 [2024-07-25 10:44:18.452459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420
00:29:15.045 qpair failed and we were unable to recover it.
00:29:15.045 [2024-07-25 10:44:18.452630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.452644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.452896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.452909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.453225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.453237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.453604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.453616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.453867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.453879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.454132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.454144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.454312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.454325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.454564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.454576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.454838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.454851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.455085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.455098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 
00:29:15.045 [2024-07-25 10:44:18.455417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.455429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.455651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.455663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.455917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.455930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.456226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.456238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.456411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.456424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.456698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.456710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.456897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.456910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.457152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.457166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.457338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.457352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.457578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.457591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 
00:29:15.045 [2024-07-25 10:44:18.457885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.457898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.458126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.458140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.458382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.458395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.458620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.045 [2024-07-25 10:44:18.458633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.045 qpair failed and we were unable to recover it. 00:29:15.045 [2024-07-25 10:44:18.458856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.458870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.459105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.459119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.459385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.459397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.459565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.459579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.459765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.459778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.459965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.459978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 
00:29:15.046 [2024-07-25 10:44:18.460148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.460160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.460397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.460410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.460638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.460650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.460806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.460819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.461009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.461021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.461193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.461205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.461456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.461468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.461646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.461659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.461921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.461934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.462164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.462177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 
00:29:15.046 [2024-07-25 10:44:18.462366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.462379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.462568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.462581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.462741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.462754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.462943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.462956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.463216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.463229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.463399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.463411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.463611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.463624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.463941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.463953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.464208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.464221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.464449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.464461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 
00:29:15.046 [2024-07-25 10:44:18.464716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.464730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.464974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.464987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.465162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.465175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.465405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.465417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.465667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.465680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.465856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.465868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.466110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-07-25 10:44:18.466124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-07-25 10:44:18.466416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.466429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.466756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.466769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.467003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.467016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-07-25 10:44:18.467174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.467187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.467366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.467386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.467684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.467696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.467885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.467899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.468123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.468136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.468395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.468408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.468606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.468619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.468845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.468858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.469090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.469103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.469277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.469290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-07-25 10:44:18.469448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.469461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.469641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.469653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.469881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.469894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.470068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.470080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.470374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.470386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.470633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.470645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.470901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.470915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.471207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.471220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.471480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.471493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.471790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.471802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-07-25 10:44:18.471983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.471995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.472248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.472260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.472500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.472512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.472807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.472819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.473061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.473074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.473367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.473380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.473483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.473496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.473656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.473669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.473908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.473921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-07-25 10:44:18.474099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-07-25 10:44:18.474111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-07-25 10:44:18.474287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.474299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.474460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.474472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.474730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.474744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.474982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.474994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.475223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.475236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.475499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.475512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.475749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.475763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.475930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.475942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.476180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.476192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.476369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.476381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-07-25 10:44:18.476555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.476568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.476816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.476831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.477056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.477069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.477304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.477316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.477492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.477506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.477769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.477783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.477945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.477958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.478187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.478200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.478365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.478377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.478536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.478549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-07-25 10:44:18.478746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.478759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.478936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.478949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.479120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.479132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.479422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.479435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.479662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.479675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.479976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.479989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.480091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.480104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.480274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.480286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.480550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.480563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.480737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.480750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-07-25 10:44:18.480975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.480988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-07-25 10:44:18.481304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-07-25 10:44:18.481317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.481488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.481501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.481737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.481750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.481907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.481920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.482078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.482091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.482316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.482330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.482485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.482498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.482678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.482690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.482922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.482935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-07-25 10:44:18.483103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.483116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.483342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.483354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.483596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.483608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.483781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.483793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.484019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.484031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.484208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.484220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.484457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.484469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.484694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.484706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.484884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.484897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.485075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.485087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-07-25 10:44:18.485350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.485362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.485603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.485617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.485849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.485862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.486109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.486121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.486305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.486317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.486576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.486589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.486836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.486848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.487087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.487100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.487408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.487420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.487646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.487659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-07-25 10:44:18.487919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.487931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.488116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.488128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.488426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.488438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.488768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.488780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.489071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-07-25 10:44:18.489083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-07-25 10:44:18.489377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.489390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.489717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.489732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.489962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.489974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.490219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.490231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.490556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.490568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 
00:29:15.050 [2024-07-25 10:44:18.490889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.490901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.491192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.491204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.491439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.491451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.491770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.491783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.492035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.492047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.492303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.492315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.492557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.492569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.492875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.492887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.493128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.493140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.493378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.493390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 
00:29:15.050 [2024-07-25 10:44:18.493683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.493695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.493964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.493977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.494160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.494172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.494501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.494513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.494755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.494767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.495087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.495100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.495290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.495302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.495559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.495571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.495868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.495881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.496142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.496153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 
00:29:15.050 [2024-07-25 10:44:18.496401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.496413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-07-25 10:44:18.496673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.050 [2024-07-25 10:44:18.496687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.496925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.496937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.497120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.497132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.497449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.497461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.497754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.497767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.497952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.497964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.498140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.498152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.498317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.498329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.498496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.498508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 
00:29:15.051 [2024-07-25 10:44:18.498813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.498825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.499016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.499028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.499209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.499221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.499395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.499407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.499631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.499643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.499881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.499894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.500147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.500160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.500327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.500339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.500582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.500594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.500770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.500782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 
00:29:15.051 [2024-07-25 10:44:18.500958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.500970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.501208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.501220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.501378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.501390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.501614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.501626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.501869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.501881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.502109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.502122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.502372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.502384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.502620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.502632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.502823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.502836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 00:29:15.051 [2024-07-25 10:44:18.503116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.503128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.051 qpair failed and we were unable to recover it. 
00:29:15.051 [2024-07-25 10:44:18.503404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.051 [2024-07-25 10:44:18.503416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.503679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.503691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.503935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.503947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.504139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.504150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.504371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.504384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.504626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.504637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.504860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.504872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.505141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.505153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.505339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.505351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.505528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.505540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-07-25 10:44:18.505763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.505775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.506016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.506028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.506254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.506266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.506428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.506440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.506732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.506744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.506997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.507008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.507171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.507185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.507501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.507513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.507711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.507727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.507894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.507906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-07-25 10:44:18.508093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.508105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.508419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.508431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.508547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.508559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.508802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.508814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.509041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.509053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.509294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.509306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.509528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.509540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.509857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.509870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.510099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.510111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-07-25 10:44:18.510382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.510394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-07-25 10:44:18.510634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-07-25 10:44:18.510646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.510889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.510902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.511205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.511217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.511500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.511512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.511853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.511866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.512052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.512064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.512317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.512329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.512653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.512665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.512934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.512948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.513195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.513207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-07-25 10:44:18.513524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.513536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.513741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.513753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.514056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.514069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.514472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.514485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.514730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.514743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.514989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.515001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.053 [2024-07-25 10:44:18.515227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.515241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:15.053 [2024-07-25 10:44:18.515537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.515551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.053 [2024-07-25 10:44:18.515878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.515892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:15.053 [2024-07-25 10:44:18.516137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.516151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.053 [2024-07-25 10:44:18.516405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.516419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.516724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.516737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.517062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.517075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.517348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.517360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.517680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.517693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.517977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.517991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.518261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.518274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.518524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.518536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-07-25 10:44:18.518729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.518742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.518977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.518989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.519216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.519229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-07-25 10:44:18.519488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-07-25 10:44:18.519501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.519822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.519835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.520061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.520073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.520321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.520333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.520589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.520602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.520848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.520860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.521212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.521227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-07-25 10:44:18.521476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.521489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.521731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.521743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.521943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.521955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.522203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.522215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.522461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.522474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.522769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.522782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.523053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.523065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.523381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.523393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.523662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.523674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.524007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.524019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-07-25 10:44:18.524267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.524280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.524585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.524597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.524893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.524905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.525221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.525234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.525468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.525480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.525695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.525707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.525921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.525934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.526137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.526149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.526486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.526498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.526752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.526765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-07-25 10:44:18.527035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.527047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.527384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.527398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.527638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.527650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.527924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.527936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.528211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.528225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.528499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.528511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-07-25 10:44:18.528836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-07-25 10:44:18.528849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.529163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.529175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.529427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.529440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.529770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.529783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-07-25 10:44:18.530029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.530043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.530339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.530352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.530591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.530606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.530909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.530922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.531163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.531176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.531529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.531542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.531770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.531783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.532022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.532034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.532276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.532288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.532531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.532543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-07-25 10:44:18.532796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.532808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.533056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.533069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.533333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.533345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.533597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.533609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.533848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.533861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.534103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.534116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.534362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.534374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.534694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.534706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.534966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.534978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.535164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.535177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-07-25 10:44:18.535504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.535517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.535817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.535831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.535960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.535972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.536291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.536303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.536551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.536563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.536860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.536873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.537057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.537069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.537364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.537376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.537535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-07-25 10:44:18.537548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-07-25 10:44:18.537846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.537859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 
00:29:15.056 [2024-07-25 10:44:18.538173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.538186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.538384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.538400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.538696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.538709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.538989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.539002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.539353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.539365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.539688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.539702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.539956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.539968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.540169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.540182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.540429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.540442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.540805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.540817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 
00:29:15.056 [2024-07-25 10:44:18.540951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.540963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.541167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.541179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.541500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.541513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.541739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.541753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.542030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.542043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.542353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.542365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.542728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.542741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.542949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.542962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.543164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.543176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.543470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.543483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 
00:29:15.056 [2024-07-25 10:44:18.543747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.543759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.544004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.544016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.544334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.544346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.544571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.544583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.544848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.544860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.545158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.545170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.545491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.545504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.545702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-07-25 10:44:18.545717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-07-25 10:44:18.545975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.545987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.546168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.546180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-07-25 10:44:18.546418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.546430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.546752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.546765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.546956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.546968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.547192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.547204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.547418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.547430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.547699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.547711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.547987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.548000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.548227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.548239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.548475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.548487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.548794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.548807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-07-25 10:44:18.549054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.549066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.549265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.549277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.549649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.549663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.549906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.549918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.550108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.550121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.550311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.550322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.550550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.550563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.550890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.550904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.551151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.551162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.551448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.551460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-07-25 10:44:18.551776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.551789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.551968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.551981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.552229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.552242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.552495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.552507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.552850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.552862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.553110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.553121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.553361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.553374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.553647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.553659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.553853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.553865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.554152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.554164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-07-25 10:44:18.554436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.554448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-07-25 10:44:18.554726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-07-25 10:44:18.554739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.555050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.555064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.555366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.555379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.555617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.555629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.555952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.555964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.556153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.556165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.556458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.556471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.556793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.556808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.557004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.557018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-07-25 10:44:18.557313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.557325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.557648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.557660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.557903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.557916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.558155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.558168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.558419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.558433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.558690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.558702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.559061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.559088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.559433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.559451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.559795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.559813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.560120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.560137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
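Note that the failing tqpair identifier switches in the block above from 0x7fae08000b90 to 0x1bdd1a0 (and back again further down), so more than one qpair object is cycling through the same refused connect(). A purely illustrative way to tally the failures per qpair from a saved copy of this console output (the file name console.log is an assumption, not something produced by this job):

    # Count failed connection attempts per tqpair value in a saved copy of this log.
    grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn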
00:29:15.058 [2024-07-25 10:44:18.560494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.560510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.560835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.560853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.561108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.058 [2024-07-25 10:44:18.561123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.561381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.561393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:15.058 [2024-07-25 10:44:18.561615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.561629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.058 [2024-07-25 10:44:18.561947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.561961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.058 [2024-07-25 10:44:18.562225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.562238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.562520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.562532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
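Interleaved with the connection errors above, the test script's own xtrace shows two setup steps: nvmf/common.sh installs a cleanup trap so shared-memory state is dumped and the target is torn down however the run ends, and host/target_disconnect.sh issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0 to create a 64 MB malloc bdev with 512-byte blocks. process_shm, nvmftestfini and rpc_cmd are helpers defined by the autotest framework; the sketch below quotes the traced trap as-is and shows a roughly equivalent standalone RPC call (the ./spdk path is an assumption):

    # Cleanup pattern traced above: always dump shm state and run the test
    # teardown, whichever of SIGINT/SIGTERM/normal exit ends the run.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

    # Same RPC the script issues: a 64 MB malloc bdev, 512-byte blocks, named Malloc0.
    ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0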
00:29:15.058 [2024-07-25 10:44:18.562834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.562846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.563145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.563157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.563490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.563502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.563797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.563809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.564034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.564047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-07-25 10:44:18.564295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-07-25 10:44:18.564307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.564573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.564584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.564786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.564799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.565060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.565072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.565310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.565322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-07-25 10:44:18.565546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.565559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.565853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.565865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.566182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.566194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.566443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.566455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.566699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.566710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.567040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.567052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.567302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.567315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.567616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.567628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.567905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.567919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.568158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.568170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-07-25 10:44:18.568461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.568473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.568732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.568744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.568999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.569011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.569308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.569321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.569624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.569636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.569953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.569966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.570149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.570161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.570507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.570520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.570832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.570845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.571140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.571151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-07-25 10:44:18.571502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.571514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.571853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.571866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.572097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.572109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.572353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.572365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.572610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.572622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.572930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.572942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.573256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.573269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.573645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.573658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-07-25 10:44:18.573955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-07-25 10:44:18.573968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.574285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.574298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 
00:29:15.060 [2024-07-25 10:44:18.574563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.574576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.574823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.574836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.575156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.575169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.575344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.575357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.575676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.575688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.576061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.576085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.576387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.576404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.576711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.576734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.577065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.577083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.577433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.577451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 
00:29:15.060 [2024-07-25 10:44:18.577788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.577807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.578045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.578062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.578332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.578350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.578656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.578673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd1a0 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.579049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.579066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.579373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.579386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.579655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.579667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.579979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.579991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.580313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.580325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.580656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.580668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 
00:29:15.060 Malloc0 00:29:15.060 [2024-07-25 10:44:18.580992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.581005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.581309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.581321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.060 [2024-07-25 10:44:18.581639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.581652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.581973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:15.060 [2024-07-25 10:44:18.581985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.060 [2024-07-25 10:44:18.582296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.582309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.582482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.582494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.060 [2024-07-25 10:44:18.582812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.582825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.583052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.583064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 
00:29:15.060 [2024-07-25 10:44:18.583261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.583273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.583611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.583623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-07-25 10:44:18.583938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-07-25 10:44:18.583951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.584190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.584202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.584533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.584545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.584840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.584853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.585168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.585180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.585420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.585432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.585751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.585764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.586070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.586082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 
00:29:15.061 [2024-07-25 10:44:18.586374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.586385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.586702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.586717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.586971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.586983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.587300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.587312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.587619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.587631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.587944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.587958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.588197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.588209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.588326] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.061 [2024-07-25 10:44:18.588425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.588437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.588598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.588610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 
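Interleaved with the connect() retries, the harness has started configuring the target: the rpc_cmd nvmf_create_transport -t tcp -o call a few lines up, acknowledged by the "*** TCP Transport Init ***" notice just above, creates the TCP transport. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py; run standalone against an already-running nvmf_tgt on the default RPC socket, the same step would look roughly like this sketch:

    # Create the NVMe-oF TCP transport; arguments mirror the harness invocation
    # above (-t selects the transport type, -o is passed through exactly as logged).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o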
00:29:15.061 [2024-07-25 10:44:18.588935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.588947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.589190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.589202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.589497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.589508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.589824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.589836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.590167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.590179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.590445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.590457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.590789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.590801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.591103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.591116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.591429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.591441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.591779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.591795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 
00:29:15.061 [2024-07-25 10:44:18.591965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.061 [2024-07-25 10:44:18.591977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.061 qpair failed and we were unable to recover it. 00:29:15.061 [2024-07-25 10:44:18.592293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.592305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.592527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.592539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.592865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.592878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.593066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.593078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.593418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.593430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.593680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.593693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.594016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.594028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.594333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.594345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.594616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.594628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 
00:29:15.062 [2024-07-25 10:44:18.594892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.594904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.595143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.595155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.595384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.595397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.595711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.595726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.596046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.596058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.596313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.596325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.596643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.596655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.596961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.596974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.062 [2024-07-25 10:44:18.597269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.597282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 
00:29:15.062 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.062 [2024-07-25 10:44:18.597600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.597612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.062 [2024-07-25 10:44:18.597840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.597853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.062 [2024-07-25 10:44:18.598174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.598187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.598428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.598440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.598761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.598774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.598999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.599013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.599217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.599229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.599549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.599561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 
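The next configuration step, issued in the middle of the same retry stream, is rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, which creates the subsystem the host will later connect to. A standalone equivalent via scripts/rpc.py, with the values copied from the harness call, would be:

    # Create the target subsystem; -a allows any host NQN to connect,
    # -s sets the subsystem serial number.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001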
00:29:15.062 [2024-07-25 10:44:18.599893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.599905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.600202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.600214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.600464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.600476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.600791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.600803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.601064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.601075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.062 [2024-07-25 10:44:18.601397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.062 [2024-07-25 10:44:18.601408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.062 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.601587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.601600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.601915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.601927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.602222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.602234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.602557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.602569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-07-25 10:44:18.602883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.602895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.603268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.603279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.603599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.603611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.603923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.603935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.604277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.604289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.604600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.604612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.604838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.604851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.605168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.605180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.605409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.605421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.605664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.605675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-07-25 10:44:18.605902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.605915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.606236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.606248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.606509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.606520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.606752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.606764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.606999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.607011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.607344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.607355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.607525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.607537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.607858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.607870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.608198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.608210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.608451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.608463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-07-25 10:44:18.608775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.608788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.609101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.609114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.609435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.609447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.063 [2024-07-25 10:44:18.609741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.609753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.063 [2024-07-25 10:44:18.609990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.610002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.063 [2024-07-25 10:44:18.610303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.610316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-07-25 10:44:18.610548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-07-25 10:44:18.610560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.610853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.610865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 
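The rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 call above attaches a namespace backed by the Malloc0 bdev (the bare "Malloc0" echoed earlier in the log is presumably the name returned by an earlier bdev-creation RPC that falls outside this excerpt). A standalone sketch of the same step:

    # Expose the Malloc0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0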
00:29:15.064 [2024-07-25 10:44:18.611180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.611192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.611498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.611510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.611824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.611837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.612154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.612166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.612464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.612476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.612738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.612750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.613064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.613075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.613396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.613409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.613649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.613661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.613973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.613985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 
00:29:15.064 [2024-07-25 10:44:18.614230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.614242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.614509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.614521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.614760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.614772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.615095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.615107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.615420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.615432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.615766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.615778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.616017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.616029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.616339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.616351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.616609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.616622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.616937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.616950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 
00:29:15.064 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.064 [2024-07-25 10:44:18.617259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.617271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.064 [2024-07-25 10:44:18.617587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.617599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.064 [2024-07-25 10:44:18.617840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.617853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.064 [2024-07-25 10:44:18.618161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.618174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.618412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.618424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.618736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.618749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.619063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.619075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.619391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.619403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 
00:29:15.064 [2024-07-25 10:44:18.619729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.619742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-07-25 10:44:18.620010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-07-25 10:44:18.620022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.620347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-07-25 10:44:18.620359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.620612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-07-25 10:44:18.620606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.065 [2024-07-25 10:44:18.620624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae08000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.065 [2024-07-25 10:44:18.628979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.629077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.629098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.629111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.629120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.629146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 
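The last two configuration steps add listeners: rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 for the data subsystem and rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 for discovery. The "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice above marks the point after which the host's connect() attempts stop being refused. Standalone equivalents via scripts/rpc.py would look roughly like:

    # Listen for the data subsystem and for discovery on the same TCP address/port.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420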
00:29:15.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4055590 00:29:15.065 [2024-07-25 10:44:18.638906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.638995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.639015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.639025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.639034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.639055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.648943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.649026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.649045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.649055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.649065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.649084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.658863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.658948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.658967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.658977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.658985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.659004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 
00:29:15.065 [2024-07-25 10:44:18.668927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.669011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.669029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.669039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.669050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.669069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.678951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.679032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.679050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.679060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.679068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.679087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.688980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.689060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.689079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.689089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.689098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.689116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 
00:29:15.065 [2024-07-25 10:44:18.699197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.699277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.699295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.699304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.699312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.699331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.709015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.709096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.709115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.709124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.709134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.709152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-07-25 10:44:18.719018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.065 [2024-07-25 10:44:18.719101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.065 [2024-07-25 10:44:18.719119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.065 [2024-07-25 10:44:18.719129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.065 [2024-07-25 10:44:18.719138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.065 [2024-07-25 10:44:18.719156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.065 qpair failed and we were unable to recover it. 
00:29:15.326 [2024-07-25 10:44:18.729072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.326 [2024-07-25 10:44:18.729153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.326 [2024-07-25 10:44:18.729171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.326 [2024-07-25 10:44:18.729181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.326 [2024-07-25 10:44:18.729190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.326 [2024-07-25 10:44:18.729208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.326 qpair failed and we were unable to recover it. 00:29:15.326 [2024-07-25 10:44:18.739068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.326 [2024-07-25 10:44:18.739179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.326 [2024-07-25 10:44:18.739199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.326 [2024-07-25 10:44:18.739209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.326 [2024-07-25 10:44:18.739217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.326 [2024-07-25 10:44:18.739236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.326 qpair failed and we were unable to recover it. 00:29:15.326 [2024-07-25 10:44:18.749138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.326 [2024-07-25 10:44:18.749222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.326 [2024-07-25 10:44:18.749240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.326 [2024-07-25 10:44:18.749250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.326 [2024-07-25 10:44:18.749258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.326 [2024-07-25 10:44:18.749277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.326 qpair failed and we were unable to recover it. 
00:29:15.326 [2024-07-25 10:44:18.759163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.759239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.759258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.759271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.759279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.759299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 00:29:15.327 [2024-07-25 10:44:18.769169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.769252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.769270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.769280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.769289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.769308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 00:29:15.327 [2024-07-25 10:44:18.779182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.779288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.779306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.779316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.779325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.779343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 
00:29:15.327 [2024-07-25 10:44:18.789182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.789270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.789288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.789298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.789307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.789326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 00:29:15.327 [2024-07-25 10:44:18.799306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.799414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.799433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.799442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.799452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.799471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 00:29:15.327 [2024-07-25 10:44:18.809236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.809320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.809338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.809348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.809357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.809377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 
00:29:15.327 [2024-07-25 10:44:18.819287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.819364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.819382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.819392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.819401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.819420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 00:29:15.327 [2024-07-25 10:44:18.829324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.829407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.829425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.829434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.829443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.829461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 00:29:15.327 [2024-07-25 10:44:18.839394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.839473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.839490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.839500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.839509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.839527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 
00:29:15.327 [2024-07-25 10:44:18.849409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.849492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.849514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.849524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.849532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.849551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.327 qpair failed and we were unable to recover it. 00:29:15.327 [2024-07-25 10:44:18.859411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.327 [2024-07-25 10:44:18.859492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.327 [2024-07-25 10:44:18.859511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.327 [2024-07-25 10:44:18.859520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.327 [2024-07-25 10:44:18.859529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.327 [2024-07-25 10:44:18.859547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 00:29:15.328 [2024-07-25 10:44:18.869614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.869728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.869746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.869755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.869764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.869783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 
00:29:15.328 [2024-07-25 10:44:18.879568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.879650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.879668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.879678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.879686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.879704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 00:29:15.328 [2024-07-25 10:44:18.889594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.889675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.889693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.889703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.889712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.889737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 00:29:15.328 [2024-07-25 10:44:18.899569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.899655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.899673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.899682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.899691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.899709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 
00:29:15.328 [2024-07-25 10:44:18.909595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.909678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.909697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.909707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.909720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.909739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 00:29:15.328 [2024-07-25 10:44:18.919634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.919718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.919736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.919746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.919754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.919773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 00:29:15.328 [2024-07-25 10:44:18.929670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.929752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.929770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.929780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.929788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.929806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 
00:29:15.328 [2024-07-25 10:44:18.939639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.939727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.939750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.939760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.939768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.939786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 00:29:15.328 [2024-07-25 10:44:18.949678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.949843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.949861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.949870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.949879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.949897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 00:29:15.328 [2024-07-25 10:44:18.959701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.959781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.959799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.959809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.328 [2024-07-25 10:44:18.959817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.328 [2024-07-25 10:44:18.959835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.328 qpair failed and we were unable to recover it. 
00:29:15.328 [2024-07-25 10:44:18.969751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.328 [2024-07-25 10:44:18.969830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.328 [2024-07-25 10:44:18.969848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.328 [2024-07-25 10:44:18.969857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.329 [2024-07-25 10:44:18.969866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.329 [2024-07-25 10:44:18.969885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.329 qpair failed and we were unable to recover it. 00:29:15.329 [2024-07-25 10:44:18.979764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.329 [2024-07-25 10:44:18.979885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.329 [2024-07-25 10:44:18.979903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.329 [2024-07-25 10:44:18.979913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.329 [2024-07-25 10:44:18.979921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.329 [2024-07-25 10:44:18.979943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.329 qpair failed and we were unable to recover it. 00:29:15.329 [2024-07-25 10:44:18.989776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.329 [2024-07-25 10:44:18.989862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.329 [2024-07-25 10:44:18.989879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.329 [2024-07-25 10:44:18.989889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.329 [2024-07-25 10:44:18.989897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.329 [2024-07-25 10:44:18.989916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.329 qpair failed and we were unable to recover it. 
00:29:15.329 [2024-07-25 10:44:18.999829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.329 [2024-07-25 10:44:18.999907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.329 [2024-07-25 10:44:18.999925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.329 [2024-07-25 10:44:18.999935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.329 [2024-07-25 10:44:18.999943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.329 [2024-07-25 10:44:18.999962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.329 qpair failed and we were unable to recover it. 00:29:15.329 [2024-07-25 10:44:19.009867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.329 [2024-07-25 10:44:19.009950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.329 [2024-07-25 10:44:19.009968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.329 [2024-07-25 10:44:19.009977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.329 [2024-07-25 10:44:19.009986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.329 [2024-07-25 10:44:19.010004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.329 qpair failed and we were unable to recover it. 00:29:15.329 [2024-07-25 10:44:19.019887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.329 [2024-07-25 10:44:19.019967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.329 [2024-07-25 10:44:19.019985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.329 [2024-07-25 10:44:19.019995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.329 [2024-07-25 10:44:19.020003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.329 [2024-07-25 10:44:19.020022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.329 qpair failed and we were unable to recover it. 
00:29:15.600 [2024-07-25 10:44:19.029913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.029999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.030017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.030027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.600 [2024-07-25 10:44:19.030036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.600 [2024-07-25 10:44:19.030054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.600 qpair failed and we were unable to recover it. 00:29:15.600 [2024-07-25 10:44:19.039929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.040088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.040107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.040116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.600 [2024-07-25 10:44:19.040125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.600 [2024-07-25 10:44:19.040144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.600 qpair failed and we were unable to recover it. 00:29:15.600 [2024-07-25 10:44:19.049900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.049994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.050012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.050022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.600 [2024-07-25 10:44:19.050030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.600 [2024-07-25 10:44:19.050049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.600 qpair failed and we were unable to recover it. 
00:29:15.600 [2024-07-25 10:44:19.059968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.060046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.060064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.060074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.600 [2024-07-25 10:44:19.060083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.600 [2024-07-25 10:44:19.060100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.600 qpair failed and we were unable to recover it. 00:29:15.600 [2024-07-25 10:44:19.069996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.070079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.070097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.070107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.600 [2024-07-25 10:44:19.070118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.600 [2024-07-25 10:44:19.070137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.600 qpair failed and we were unable to recover it. 00:29:15.600 [2024-07-25 10:44:19.080019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.080092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.080110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.080120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.600 [2024-07-25 10:44:19.080129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.600 [2024-07-25 10:44:19.080146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.600 qpair failed and we were unable to recover it. 
00:29:15.600 [2024-07-25 10:44:19.090093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.090200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.090218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.090227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.600 [2024-07-25 10:44:19.090236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.600 [2024-07-25 10:44:19.090254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.600 qpair failed and we were unable to recover it. 00:29:15.600 [2024-07-25 10:44:19.100080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.600 [2024-07-25 10:44:19.100166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.600 [2024-07-25 10:44:19.100184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.600 [2024-07-25 10:44:19.100193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.100202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.100219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.110120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.110200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.110218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.110228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.110236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.110255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 
00:29:15.601 [2024-07-25 10:44:19.120176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.120261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.120279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.120288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.120297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.120315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.130084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.130332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.130352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.130362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.130371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.130390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.140178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.140260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.140277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.140287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.140295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.140313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 
00:29:15.601 [2024-07-25 10:44:19.150237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.150318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.150336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.150345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.150354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.150372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.160205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.160285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.160303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.160316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.160324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.160343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.170286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.170365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.170383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.170393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.170401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.170420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 
00:29:15.601 [2024-07-25 10:44:19.180294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.180376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.180394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.180404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.180413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.180431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.190362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.190445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.190463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.190473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.190482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.190500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.200413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.200493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.200511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.200520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.200529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.200547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 
00:29:15.601 [2024-07-25 10:44:19.210385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.210461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.210479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.210488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.210497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.210515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.220400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.220484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.220501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.220511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.220520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.220539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.230491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.230608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.230626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.230635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.230644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.230662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 
00:29:15.601 [2024-07-25 10:44:19.240477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.240556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.240574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.240584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.240592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.240611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.250535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.250610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.250627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.250640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.250648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.250667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.260519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.260605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.260624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.260634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.260642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.260661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 
00:29:15.601 [2024-07-25 10:44:19.270584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.601 [2024-07-25 10:44:19.270671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.601 [2024-07-25 10:44:19.270689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.601 [2024-07-25 10:44:19.270698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.601 [2024-07-25 10:44:19.270707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.601 [2024-07-25 10:44:19.270730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.601 qpair failed and we were unable to recover it. 00:29:15.601 [2024-07-25 10:44:19.280538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.602 [2024-07-25 10:44:19.280617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.602 [2024-07-25 10:44:19.280635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.602 [2024-07-25 10:44:19.280645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.602 [2024-07-25 10:44:19.280654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.602 [2024-07-25 10:44:19.280672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.602 qpair failed and we were unable to recover it. 00:29:15.602 [2024-07-25 10:44:19.290626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.602 [2024-07-25 10:44:19.290733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.602 [2024-07-25 10:44:19.290751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.602 [2024-07-25 10:44:19.290762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.602 [2024-07-25 10:44:19.290770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.602 [2024-07-25 10:44:19.290789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.602 qpair failed and we were unable to recover it. 
00:29:15.862 [2024-07-25 10:44:19.300643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.300741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.300759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.300769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.300778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.300796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 00:29:15.862 [2024-07-25 10:44:19.310688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.310775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.310793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.310802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.310811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.310829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 00:29:15.862 [2024-07-25 10:44:19.320721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.320802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.320820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.320830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.320839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.320858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 
00:29:15.862 [2024-07-25 10:44:19.330758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.330832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.330850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.330860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.330869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.330888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 00:29:15.862 [2024-07-25 10:44:19.340763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.340882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.340904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.340914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.340922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.340941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 00:29:15.862 [2024-07-25 10:44:19.350779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.350854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.350872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.350881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.350890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.350909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 
00:29:15.862 [2024-07-25 10:44:19.360844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.360925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.360944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.360953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.360962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.360980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 00:29:15.862 [2024-07-25 10:44:19.370895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.370976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.370994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.371003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.371012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.371030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 00:29:15.862 [2024-07-25 10:44:19.380855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.380938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.380956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.380965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.380974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.380996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 
00:29:15.862 [2024-07-25 10:44:19.390908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.862 [2024-07-25 10:44:19.390990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.862 [2024-07-25 10:44:19.391008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.862 [2024-07-25 10:44:19.391017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.862 [2024-07-25 10:44:19.391026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.862 [2024-07-25 10:44:19.391045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.862 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.400946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.401026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.401044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.401053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.401062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.401080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.410914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.411020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.411038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.411047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.411056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.411074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 
00:29:15.863 [2024-07-25 10:44:19.420973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.421138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.421156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.421165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.421174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.421192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.431014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.431098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.431119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.431129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.431137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.431155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.441049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.441128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.441146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.441155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.441164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.441181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 
00:29:15.863 [2024-07-25 10:44:19.451090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.451205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.451223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.451232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.451241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.451259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.461115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.461193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.461211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.461221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.461230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.461247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.471111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.471195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.471213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.471222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.471233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.471252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 
00:29:15.863 [2024-07-25 10:44:19.481166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.481245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.481263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.481272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.481281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.481299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.491176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.491262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.491280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.491289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.491298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.491316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 00:29:15.863 [2024-07-25 10:44:19.501213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.501345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.501362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.501372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.501380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.501398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.863 qpair failed and we were unable to recover it. 
00:29:15.863 [2024-07-25 10:44:19.511238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.863 [2024-07-25 10:44:19.511357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.863 [2024-07-25 10:44:19.511375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.863 [2024-07-25 10:44:19.511384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.863 [2024-07-25 10:44:19.511393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.863 [2024-07-25 10:44:19.511411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.864 qpair failed and we were unable to recover it. 00:29:15.864 [2024-07-25 10:44:19.521297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.864 [2024-07-25 10:44:19.521380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.864 [2024-07-25 10:44:19.521398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.864 [2024-07-25 10:44:19.521407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.864 [2024-07-25 10:44:19.521416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.864 [2024-07-25 10:44:19.521434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.864 qpair failed and we were unable to recover it. 00:29:15.864 [2024-07-25 10:44:19.531286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.864 [2024-07-25 10:44:19.531362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.864 [2024-07-25 10:44:19.531380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.864 [2024-07-25 10:44:19.531390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.864 [2024-07-25 10:44:19.531399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.864 [2024-07-25 10:44:19.531417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.864 qpair failed and we were unable to recover it. 
00:29:15.864 [2024-07-25 10:44:19.541316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.864 [2024-07-25 10:44:19.541398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.864 [2024-07-25 10:44:19.541416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.864 [2024-07-25 10:44:19.541425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.864 [2024-07-25 10:44:19.541434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.864 [2024-07-25 10:44:19.541452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.864 qpair failed and we were unable to recover it. 00:29:15.864 [2024-07-25 10:44:19.551321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.864 [2024-07-25 10:44:19.551449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.864 [2024-07-25 10:44:19.551467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.864 [2024-07-25 10:44:19.551476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.864 [2024-07-25 10:44:19.551485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.864 [2024-07-25 10:44:19.551503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.864 qpair failed and we were unable to recover it. 00:29:15.864 [2024-07-25 10:44:19.561430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.864 [2024-07-25 10:44:19.561510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.864 [2024-07-25 10:44:19.561529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.864 [2024-07-25 10:44:19.561543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.864 [2024-07-25 10:44:19.561553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:15.864 [2024-07-25 10:44:19.561571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.864 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-25 10:44:19.571337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.125 [2024-07-25 10:44:19.571420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.125 [2024-07-25 10:44:19.571439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.125 [2024-07-25 10:44:19.571449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.125 [2024-07-25 10:44:19.571457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.125 [2024-07-25 10:44:19.571475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-25 10:44:19.581422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.125 [2024-07-25 10:44:19.581504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.125 [2024-07-25 10:44:19.581523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.125 [2024-07-25 10:44:19.581532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.125 [2024-07-25 10:44:19.581541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.125 [2024-07-25 10:44:19.581559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-25 10:44:19.591473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.125 [2024-07-25 10:44:19.591554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.125 [2024-07-25 10:44:19.591571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.125 [2024-07-25 10:44:19.591581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.125 [2024-07-25 10:44:19.591590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.125 [2024-07-25 10:44:19.591609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.125 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-25 10:44:19.601464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.125 [2024-07-25 10:44:19.601542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.125 [2024-07-25 10:44:19.601560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.125 [2024-07-25 10:44:19.601570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.125 [2024-07-25 10:44:19.601578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.125 [2024-07-25 10:44:19.601597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-25 10:44:19.611560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.125 [2024-07-25 10:44:19.611669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.125 [2024-07-25 10:44:19.611687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.125 [2024-07-25 10:44:19.611697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.125 [2024-07-25 10:44:19.611705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.125 [2024-07-25 10:44:19.611728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-25 10:44:19.621560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.125 [2024-07-25 10:44:19.621645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.125 [2024-07-25 10:44:19.621663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.125 [2024-07-25 10:44:19.621672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.125 [2024-07-25 10:44:19.621681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.125 [2024-07-25 10:44:19.621699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.125 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-25 10:44:19.631520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.125 [2024-07-25 10:44:19.631702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.631725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.631735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.631744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.631763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.641603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.641682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.641700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.641710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.641724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.641742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.651624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.651706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.651728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.651741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.651750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.651770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 
00:29:16.126 [2024-07-25 10:44:19.661650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.661737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.661755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.661765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.661773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.661792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.671701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.671784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.671803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.671813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.671821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.671840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.681734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.681819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.681836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.681846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.681855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.681873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 
00:29:16.126 [2024-07-25 10:44:19.691742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.691821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.691838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.691848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.691857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.691875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.701811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.701891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.701908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.701918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.701926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.701944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.711821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.711910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.711928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.711938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.711947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.711965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 
00:29:16.126 [2024-07-25 10:44:19.721819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.721899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.721917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.721927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.721935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.721954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.731918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.731998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.732016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.732026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.732035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.126 [2024-07-25 10:44:19.732052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.126 qpair failed and we were unable to recover it. 00:29:16.126 [2024-07-25 10:44:19.741911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.126 [2024-07-25 10:44:19.742003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.126 [2024-07-25 10:44:19.742026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.126 [2024-07-25 10:44:19.742036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.126 [2024-07-25 10:44:19.742045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.127 [2024-07-25 10:44:19.742063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.127 qpair failed and we were unable to recover it. 
00:29:16.127 [2024-07-25 10:44:19.751855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.751976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.751993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.752003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.752011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.127 [2024-07-25 10:44:19.752030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.127 qpair failed and we were unable to recover it. 00:29:16.127 [2024-07-25 10:44:19.761974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.762058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.762076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.762085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.762094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.127 [2024-07-25 10:44:19.762112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.127 qpair failed and we were unable to recover it. 00:29:16.127 [2024-07-25 10:44:19.772001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.772081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.772099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.772108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.772117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.127 [2024-07-25 10:44:19.772135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.127 qpair failed and we were unable to recover it. 
00:29:16.127 [2024-07-25 10:44:19.782004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.782172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.782190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.782199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.782208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.127 [2024-07-25 10:44:19.782230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.127 qpair failed and we were unable to recover it. 00:29:16.127 [2024-07-25 10:44:19.792045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.792129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.792148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.792158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.792167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae08000b90 00:29:16.127 [2024-07-25 10:44:19.792186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.127 qpair failed and we were unable to recover it. 00:29:16.127 [2024-07-25 10:44:19.802051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.802151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.802181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.802197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.802210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.127 [2024-07-25 10:44:19.802238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.127 qpair failed and we were unable to recover it. 
00:29:16.127 [2024-07-25 10:44:19.812142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.812224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.812243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.812253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.812262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.127 [2024-07-25 10:44:19.812280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.127 qpair failed and we were unable to recover it. 00:29:16.127 [2024-07-25 10:44:19.822138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.127 [2024-07-25 10:44:19.822305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.127 [2024-07-25 10:44:19.822324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.127 [2024-07-25 10:44:19.822334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.127 [2024-07-25 10:44:19.822343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.127 [2024-07-25 10:44:19.822361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.127 qpair failed and we were unable to recover it. 00:29:16.388 [2024-07-25 10:44:19.832107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.388 [2024-07-25 10:44:19.832186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.388 [2024-07-25 10:44:19.832208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.388 [2024-07-25 10:44:19.832218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.388 [2024-07-25 10:44:19.832226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.388 [2024-07-25 10:44:19.832244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-07-25 10:44:19.842212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.388 [2024-07-25 10:44:19.842331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.388 [2024-07-25 10:44:19.842349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.388 [2024-07-25 10:44:19.842359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.388 [2024-07-25 10:44:19.842368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.388 [2024-07-25 10:44:19.842385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-07-25 10:44:19.852175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.388 [2024-07-25 10:44:19.852256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.388 [2024-07-25 10:44:19.852275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.388 [2024-07-25 10:44:19.852285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.388 [2024-07-25 10:44:19.852294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.388 [2024-07-25 10:44:19.852312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-07-25 10:44:19.862230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.388 [2024-07-25 10:44:19.862388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.388 [2024-07-25 10:44:19.862408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.388 [2024-07-25 10:44:19.862418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.388 [2024-07-25 10:44:19.862427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.388 [2024-07-25 10:44:19.862445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-07-25 10:44:19.872254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.388 [2024-07-25 10:44:19.872339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.388 [2024-07-25 10:44:19.872358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.388 [2024-07-25 10:44:19.872369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.388 [2024-07-25 10:44:19.872378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.388 [2024-07-25 10:44:19.872400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-07-25 10:44:19.882240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.388 [2024-07-25 10:44:19.882402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.388 [2024-07-25 10:44:19.882422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.388 [2024-07-25 10:44:19.882432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.388 [2024-07-25 10:44:19.882441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.388 [2024-07-25 10:44:19.882460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-07-25 10:44:19.892363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.388 [2024-07-25 10:44:19.892442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.388 [2024-07-25 10:44:19.892461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.388 [2024-07-25 10:44:19.892471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.388 [2024-07-25 10:44:19.892480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.388 [2024-07-25 10:44:19.892498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-07-25 10:44:19.902408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.902538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.902556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.902566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.902575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.902592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:19.912400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.912482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.912501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.912510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.912519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.912537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:19.922396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.922475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.922496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.922507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.922518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.922538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-07-25 10:44:19.932435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.932515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.932534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.932544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.932553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.932571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:19.942485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.942653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.942671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.942682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.942690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.942709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:19.952427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.952511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.952529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.952539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.952548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.952566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-07-25 10:44:19.962474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.962556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.962575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.962585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.962597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.962615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:19.972537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.972616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.972635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.972645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.972654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.972672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:19.982546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.982630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.982648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.982658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.982666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.982684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-07-25 10:44:19.992547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:19.992637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:19.992655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:19.992665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:19.992674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:19.992691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:20.002673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:20.002807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:20.002826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:20.002836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:20.002845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:20.002862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-07-25 10:44:20.012703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.389 [2024-07-25 10:44:20.012798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.389 [2024-07-25 10:44:20.012818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.389 [2024-07-25 10:44:20.012828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.389 [2024-07-25 10:44:20.012837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.389 [2024-07-25 10:44:20.012856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 
00:29:16.390 [2024-07-25 10:44:20.022743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.390 [2024-07-25 10:44:20.022831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.390 [2024-07-25 10:44:20.022850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.390 [2024-07-25 10:44:20.022861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.390 [2024-07-25 10:44:20.022870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.390 [2024-07-25 10:44:20.022889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-07-25 10:44:20.032780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.390 [2024-07-25 10:44:20.032874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.390 [2024-07-25 10:44:20.032892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.390 [2024-07-25 10:44:20.032903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.390 [2024-07-25 10:44:20.032911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.390 [2024-07-25 10:44:20.032929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-07-25 10:44:20.042801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.390 [2024-07-25 10:44:20.042891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.390 [2024-07-25 10:44:20.042909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.390 [2024-07-25 10:44:20.042919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.390 [2024-07-25 10:44:20.042928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.390 [2024-07-25 10:44:20.042946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 
00:29:16.390 [2024-07-25 10:44:20.052743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.390 [2024-07-25 10:44:20.052836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.390 [2024-07-25 10:44:20.052854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.390 [2024-07-25 10:44:20.052864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.390 [2024-07-25 10:44:20.052879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.390 [2024-07-25 10:44:20.052897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-07-25 10:44:20.062826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.390 [2024-07-25 10:44:20.062912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.390 [2024-07-25 10:44:20.062931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.390 [2024-07-25 10:44:20.062941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.390 [2024-07-25 10:44:20.062950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.390 [2024-07-25 10:44:20.062967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-07-25 10:44:20.072759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.390 [2024-07-25 10:44:20.072845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.390 [2024-07-25 10:44:20.072863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.390 [2024-07-25 10:44:20.072873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.390 [2024-07-25 10:44:20.072885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.390 [2024-07-25 10:44:20.072902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 
00:29:16.390 [2024-07-25 10:44:20.083219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.390 [2024-07-25 10:44:20.083321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.390 [2024-07-25 10:44:20.083340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.390 [2024-07-25 10:44:20.083351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.390 [2024-07-25 10:44:20.083360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.390 [2024-07-25 10:44:20.083380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.650 [2024-07-25 10:44:20.092857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.650 [2024-07-25 10:44:20.092938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.650 [2024-07-25 10:44:20.092959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.650 [2024-07-25 10:44:20.092970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.650 [2024-07-25 10:44:20.092979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.650 [2024-07-25 10:44:20.092997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.650 qpair failed and we were unable to recover it. 00:29:16.650 [2024-07-25 10:44:20.103012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.650 [2024-07-25 10:44:20.103103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.650 [2024-07-25 10:44:20.103122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.650 [2024-07-25 10:44:20.103132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.650 [2024-07-25 10:44:20.103140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.650 [2024-07-25 10:44:20.103158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.650 qpair failed and we were unable to recover it. 
00:29:16.650 [2024-07-25 10:44:20.112947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.650 [2024-07-25 10:44:20.113036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.650 [2024-07-25 10:44:20.113054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.650 [2024-07-25 10:44:20.113064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.650 [2024-07-25 10:44:20.113073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.650 [2024-07-25 10:44:20.113091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.650 qpair failed and we were unable to recover it. 00:29:16.650 [2024-07-25 10:44:20.122935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.650 [2024-07-25 10:44:20.123026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.650 [2024-07-25 10:44:20.123045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.650 [2024-07-25 10:44:20.123055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.650 [2024-07-25 10:44:20.123063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.650 [2024-07-25 10:44:20.123081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.650 qpair failed and we were unable to recover it. 00:29:16.650 [2024-07-25 10:44:20.132939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.650 [2024-07-25 10:44:20.133020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.650 [2024-07-25 10:44:20.133038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.650 [2024-07-25 10:44:20.133048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.650 [2024-07-25 10:44:20.133057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.650 [2024-07-25 10:44:20.133074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.650 qpair failed and we were unable to recover it. 
00:29:16.650 [2024-07-25 10:44:20.143062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.650 [2024-07-25 10:44:20.143147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.650 [2024-07-25 10:44:20.143166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.650 [2024-07-25 10:44:20.143179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.650 [2024-07-25 10:44:20.143188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.650 [2024-07-25 10:44:20.143206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.650 qpair failed and we were unable to recover it. 00:29:16.650 [2024-07-25 10:44:20.152994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.153080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.153098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.153108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.153117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.153134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 00:29:16.651 [2024-07-25 10:44:20.163102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.163184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.163202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.163212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.163221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.163239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 
00:29:16.651 [2024-07-25 10:44:20.173134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.173267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.173286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.173296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.173305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.173322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 00:29:16.651 [2024-07-25 10:44:20.183116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.183201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.183219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.183229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.183237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.183255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 00:29:16.651 [2024-07-25 10:44:20.193168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.193264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.193282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.193292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.193301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.193318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 
00:29:16.651 [2024-07-25 10:44:20.203239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.203347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.203366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.203376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.203384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.203402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 00:29:16.651 [2024-07-25 10:44:20.213235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.213324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.213342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.213351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.213360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.213378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 00:29:16.651 [2024-07-25 10:44:20.223276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.223389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.223407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.223417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.223425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.223442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 
00:29:16.651 [2024-07-25 10:44:20.233273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.233359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.233377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.233390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.233399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.233417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 00:29:16.651 [2024-07-25 10:44:20.243322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.243402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.243420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.243430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.243439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.243456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 00:29:16.651 [2024-07-25 10:44:20.253362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.253481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.253500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.651 [2024-07-25 10:44:20.253509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.651 [2024-07-25 10:44:20.253518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.651 [2024-07-25 10:44:20.253535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.651 qpair failed and we were unable to recover it. 
00:29:16.651 [2024-07-25 10:44:20.263431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.651 [2024-07-25 10:44:20.263517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.651 [2024-07-25 10:44:20.263536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.263546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.263555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.263572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 00:29:16.652 [2024-07-25 10:44:20.273428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.273518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.273536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.273546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.273554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.273572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 00:29:16.652 [2024-07-25 10:44:20.283468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.283550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.283568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.283578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.283587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.283604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 
00:29:16.652 [2024-07-25 10:44:20.293499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.293613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.293633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.293643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.293651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.293669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 00:29:16.652 [2024-07-25 10:44:20.303501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.303586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.303604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.303614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.303623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.303640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 00:29:16.652 [2024-07-25 10:44:20.313530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.313616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.313634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.313644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.313652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.313670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 
00:29:16.652 [2024-07-25 10:44:20.323575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.323689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.323707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.323723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.323731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.323748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 00:29:16.652 [2024-07-25 10:44:20.333592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.333720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.333741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.333751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.333760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.333778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 00:29:16.652 [2024-07-25 10:44:20.343626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.652 [2024-07-25 10:44:20.343717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.652 [2024-07-25 10:44:20.343736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.652 [2024-07-25 10:44:20.343745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.652 [2024-07-25 10:44:20.343754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.652 [2024-07-25 10:44:20.343771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.652 qpair failed and we were unable to recover it. 
00:29:16.913 [2024-07-25 10:44:20.353643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.353732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.353751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.353761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.353770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.353788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 00:29:16.913 [2024-07-25 10:44:20.363675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.363759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.363778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.363788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.363796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.363814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 00:29:16.913 [2024-07-25 10:44:20.373672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.373755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.373773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.373783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.373792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.373809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 
00:29:16.913 [2024-07-25 10:44:20.383733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.383834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.383852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.383862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.383870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.383888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 00:29:16.913 [2024-07-25 10:44:20.393742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.393824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.393843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.393852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.393861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.393878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 00:29:16.913 [2024-07-25 10:44:20.403794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.403876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.403894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.403904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.403913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.403930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 
00:29:16.913 [2024-07-25 10:44:20.413814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.413894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.413915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.413925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.413934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.413951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 00:29:16.913 [2024-07-25 10:44:20.423884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.424044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.424063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.424072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.424081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.424099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 00:29:16.913 [2024-07-25 10:44:20.433880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.913 [2024-07-25 10:44:20.433963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.913 [2024-07-25 10:44:20.433981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.913 [2024-07-25 10:44:20.433991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.913 [2024-07-25 10:44:20.434000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.913 [2024-07-25 10:44:20.434017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.913 qpair failed and we were unable to recover it. 
00:29:16.913 [2024-07-25 10:44:20.443882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.443960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.443979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.443988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.443997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.444014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 00:29:16.914 [2024-07-25 10:44:20.453937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.454014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.454032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.454041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.454050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.454068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 00:29:16.914 [2024-07-25 10:44:20.463992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.464122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.464140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.464150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.464159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.464176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 
00:29:16.914 [2024-07-25 10:44:20.473964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.474049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.474067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.474077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.474085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.474102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 00:29:16.914 [2024-07-25 10:44:20.484025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.484104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.484123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.484132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.484141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.484158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 00:29:16.914 [2024-07-25 10:44:20.494022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.494131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.494149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.494159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.494168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.494185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 
00:29:16.914 [2024-07-25 10:44:20.504058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.504141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.504163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.504174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.504182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.504200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 00:29:16.914 [2024-07-25 10:44:20.514112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.514193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.514211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.514221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.514230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.514247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 00:29:16.914 [2024-07-25 10:44:20.524183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.524294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.524312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.524322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.524330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.524348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 
00:29:16.914 [2024-07-25 10:44:20.534099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.534194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.914 [2024-07-25 10:44:20.534213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.914 [2024-07-25 10:44:20.534222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.914 [2024-07-25 10:44:20.534231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.914 [2024-07-25 10:44:20.534248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.914 qpair failed and we were unable to recover it. 00:29:16.914 [2024-07-25 10:44:20.544161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.914 [2024-07-25 10:44:20.544240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.544259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.544268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.544277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.544299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 00:29:16.915 [2024-07-25 10:44:20.554224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.915 [2024-07-25 10:44:20.554311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.554329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.554339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.554347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.554365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 
00:29:16.915 [2024-07-25 10:44:20.564215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.915 [2024-07-25 10:44:20.564295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.564314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.564323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.564332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.564349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 00:29:16.915 [2024-07-25 10:44:20.574251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.915 [2024-07-25 10:44:20.574326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.574344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.574354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.574362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.574379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 00:29:16.915 [2024-07-25 10:44:20.584317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.915 [2024-07-25 10:44:20.584396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.584414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.584423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.584432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.584449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 
00:29:16.915 [2024-07-25 10:44:20.594310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.915 [2024-07-25 10:44:20.594390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.594411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.594421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.594429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.594446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 00:29:16.915 [2024-07-25 10:44:20.604340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.915 [2024-07-25 10:44:20.604424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.604442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.604451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.604460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.604478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 00:29:16.915 [2024-07-25 10:44:20.614384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.915 [2024-07-25 10:44:20.614469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.915 [2024-07-25 10:44:20.614488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.915 [2024-07-25 10:44:20.614497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.915 [2024-07-25 10:44:20.614506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:16.915 [2024-07-25 10:44:20.614523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.915 qpair failed and we were unable to recover it. 
00:29:17.175 [2024-07-25 10:44:20.624400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.175 [2024-07-25 10:44:20.624566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.175 [2024-07-25 10:44:20.624584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.175 [2024-07-25 10:44:20.624594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.175 [2024-07-25 10:44:20.624603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.175 [2024-07-25 10:44:20.624621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.175 qpair failed and we were unable to recover it. 00:29:17.175 [2024-07-25 10:44:20.634451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.175 [2024-07-25 10:44:20.634533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.175 [2024-07-25 10:44:20.634551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.175 [2024-07-25 10:44:20.634560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.175 [2024-07-25 10:44:20.634569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.175 [2024-07-25 10:44:20.634590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.175 qpair failed and we were unable to recover it. 00:29:17.175 [2024-07-25 10:44:20.644469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.175 [2024-07-25 10:44:20.644547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.175 [2024-07-25 10:44:20.644566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.175 [2024-07-25 10:44:20.644575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.644584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.644600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 
00:29:17.176 [2024-07-25 10:44:20.654498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.654575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.654593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.654602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.654611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.654628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 00:29:17.176 [2024-07-25 10:44:20.664566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.664673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.664692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.664701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.664710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.664732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 00:29:17.176 [2024-07-25 10:44:20.674600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.674712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.674733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.674742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.674751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.674769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 
00:29:17.176 [2024-07-25 10:44:20.684597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.684676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.684697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.684707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.684719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.684738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 00:29:17.176 [2024-07-25 10:44:20.694623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.694702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.694723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.694733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.694742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.694759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 00:29:17.176 [2024-07-25 10:44:20.704672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.704756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.704775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.704784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.704793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.704810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 
00:29:17.176 [2024-07-25 10:44:20.714680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.714811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.714829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.714839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.714849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.714866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 00:29:17.176 [2024-07-25 10:44:20.724696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.724789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.724807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.724817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.724828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.724846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 00:29:17.176 [2024-07-25 10:44:20.734737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.734820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.734838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.176 [2024-07-25 10:44:20.734848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.176 [2024-07-25 10:44:20.734857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.176 [2024-07-25 10:44:20.734874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.176 qpair failed and we were unable to recover it. 
00:29:17.176 [2024-07-25 10:44:20.744787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.176 [2024-07-25 10:44:20.744956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.176 [2024-07-25 10:44:20.744974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.744984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.744993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.745011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 00:29:17.177 [2024-07-25 10:44:20.754764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.754848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.754867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.754876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.754885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.754903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 00:29:17.177 [2024-07-25 10:44:20.764818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.764893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.764911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.764921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.764930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.764947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 
00:29:17.177 [2024-07-25 10:44:20.774893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.774972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.774990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.775000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.775008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.775025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 00:29:17.177 [2024-07-25 10:44:20.784885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.784968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.784986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.784996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.785004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.785022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 00:29:17.177 [2024-07-25 10:44:20.794829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.794915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.794933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.794942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.794951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.794968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 
00:29:17.177 [2024-07-25 10:44:20.804920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.805000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.805018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.805029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.805038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.805055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 00:29:17.177 [2024-07-25 10:44:20.814940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.815048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.815067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.815076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.815092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.815109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 00:29:17.177 [2024-07-25 10:44:20.824985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.825070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.825088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.825098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.825107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.825124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 
00:29:17.177 [2024-07-25 10:44:20.834994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.835073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.835091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.177 [2024-07-25 10:44:20.835100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.177 [2024-07-25 10:44:20.835109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.177 [2024-07-25 10:44:20.835126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 qpair failed and we were unable to recover it. 00:29:17.177 [2024-07-25 10:44:20.845058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.177 [2024-07-25 10:44:20.845172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.177 [2024-07-25 10:44:20.845190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.178 [2024-07-25 10:44:20.845200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.178 [2024-07-25 10:44:20.845208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.178 [2024-07-25 10:44:20.845225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.178 qpair failed and we were unable to recover it. 00:29:17.178 [2024-07-25 10:44:20.855077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.178 [2024-07-25 10:44:20.855195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.178 [2024-07-25 10:44:20.855213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.178 [2024-07-25 10:44:20.855223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.178 [2024-07-25 10:44:20.855232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.178 [2024-07-25 10:44:20.855249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.178 qpair failed and we were unable to recover it. 
00:29:17.178 [2024-07-25 10:44:20.865129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.178 [2024-07-25 10:44:20.865239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.178 [2024-07-25 10:44:20.865257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.178 [2024-07-25 10:44:20.865267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.178 [2024-07-25 10:44:20.865275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.178 [2024-07-25 10:44:20.865292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.178 qpair failed and we were unable to recover it. 00:29:17.178 [2024-07-25 10:44:20.875119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.178 [2024-07-25 10:44:20.875197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.178 [2024-07-25 10:44:20.875215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.178 [2024-07-25 10:44:20.875225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.178 [2024-07-25 10:44:20.875233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.178 [2024-07-25 10:44:20.875250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.178 qpair failed and we were unable to recover it. 00:29:17.438 [2024-07-25 10:44:20.885161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.438 [2024-07-25 10:44:20.885237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.438 [2024-07-25 10:44:20.885255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.438 [2024-07-25 10:44:20.885264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.438 [2024-07-25 10:44:20.885273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.438 [2024-07-25 10:44:20.885290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.438 qpair failed and we were unable to recover it. 
00:29:17.438 [2024-07-25 10:44:20.895149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.438 [2024-07-25 10:44:20.895234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.438 [2024-07-25 10:44:20.895252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.438 [2024-07-25 10:44:20.895262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.438 [2024-07-25 10:44:20.895272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.438 [2024-07-25 10:44:20.895289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.438 qpair failed and we were unable to recover it. 00:29:17.438 [2024-07-25 10:44:20.905218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.438 [2024-07-25 10:44:20.905344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.438 [2024-07-25 10:44:20.905361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.438 [2024-07-25 10:44:20.905371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.438 [2024-07-25 10:44:20.905383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.438 [2024-07-25 10:44:20.905400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.438 qpair failed and we were unable to recover it. 00:29:17.438 [2024-07-25 10:44:20.915210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.438 [2024-07-25 10:44:20.915292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.438 [2024-07-25 10:44:20.915311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.438 [2024-07-25 10:44:20.915321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.438 [2024-07-25 10:44:20.915330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.438 [2024-07-25 10:44:20.915347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.438 qpair failed and we were unable to recover it. 
00:29:17.438 [2024-07-25 10:44:20.925249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.438 [2024-07-25 10:44:20.925328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.438 [2024-07-25 10:44:20.925348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.925358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.925366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.925384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 00:29:17.439 [2024-07-25 10:44:20.935298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:20.935409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:20.935427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.935437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.935446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.935463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 00:29:17.439 [2024-07-25 10:44:20.945290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:20.945367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:20.945386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.945396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.945404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.945421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 
00:29:17.439 [2024-07-25 10:44:20.955348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:20.955427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:20.955446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.955456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.955465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.955483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 00:29:17.439 [2024-07-25 10:44:20.965360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:20.965455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:20.965474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.965484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.965493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.965510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 00:29:17.439 [2024-07-25 10:44:20.975342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:20.975437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:20.975455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.975465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.975474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.975491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 
00:29:17.439 [2024-07-25 10:44:20.985441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:20.985554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:20.985572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.985582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.985591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.985608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 00:29:17.439 [2024-07-25 10:44:20.995461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:20.995541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:20.995560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:20.995572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:20.995581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:20.995600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 00:29:17.439 [2024-07-25 10:44:21.005501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:21.005622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:21.005640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.439 [2024-07-25 10:44:21.005650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.439 [2024-07-25 10:44:21.005659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.439 [2024-07-25 10:44:21.005676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.439 qpair failed and we were unable to recover it. 
00:29:17.439 [2024-07-25 10:44:21.015576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.439 [2024-07-25 10:44:21.015658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.439 [2024-07-25 10:44:21.015676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.015686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.015695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.015712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.025535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.025616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.025634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.025643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.025652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.025669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.035564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.035644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.035663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.035673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.035682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.035700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 
00:29:17.440 [2024-07-25 10:44:21.045597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.045675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.045693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.045703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.045712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.045735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.055640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.055722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.055741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.055751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.055760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.055777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.065711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.065824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.065843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.065853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.065862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.065880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 
00:29:17.440 [2024-07-25 10:44:21.075725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.075804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.075822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.075832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.075841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.075857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.085767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.085846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.085865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.085877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.085888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.085906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.095762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.095847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.095865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.095875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.095883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.095901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 
00:29:17.440 [2024-07-25 10:44:21.105800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.105883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.105902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.105912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.105920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.105938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.115818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.116014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.116033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.116043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.116052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.116070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 00:29:17.440 [2024-07-25 10:44:21.125911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.125995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.126014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.126023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.440 [2024-07-25 10:44:21.126032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.440 [2024-07-25 10:44:21.126050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.440 qpair failed and we were unable to recover it. 
00:29:17.440 [2024-07-25 10:44:21.135882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.440 [2024-07-25 10:44:21.135966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.440 [2024-07-25 10:44:21.135985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.440 [2024-07-25 10:44:21.135994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.441 [2024-07-25 10:44:21.136003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.441 [2024-07-25 10:44:21.136020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.441 qpair failed and we were unable to recover it. 00:29:17.701 [2024-07-25 10:44:21.145928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.146010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.146028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.146039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.146048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.146068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 00:29:17.701 [2024-07-25 10:44:21.155920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.156023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.156042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.156052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.156061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.156078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 
00:29:17.701 [2024-07-25 10:44:21.165949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.166024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.166042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.166052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.166061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.166079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 00:29:17.701 [2024-07-25 10:44:21.175916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.176080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.176102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.176112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.176121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.176138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 00:29:17.701 [2024-07-25 10:44:21.185941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.186049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.186067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.186077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.186085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.186103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 
00:29:17.701 [2024-07-25 10:44:21.196041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.196125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.196143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.196153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.196161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.196179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 00:29:17.701 [2024-07-25 10:44:21.206056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.206139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.206157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.206167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.206177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.206195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 00:29:17.701 [2024-07-25 10:44:21.216132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.216209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.216227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.216237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.216246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.216263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 
00:29:17.701 [2024-07-25 10:44:21.226127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.226210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.226227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.226237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.226245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.226267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 00:29:17.701 [2024-07-25 10:44:21.236135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.701 [2024-07-25 10:44:21.236222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.701 [2024-07-25 10:44:21.236240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.701 [2024-07-25 10:44:21.236250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.701 [2024-07-25 10:44:21.236259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.701 [2024-07-25 10:44:21.236276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.701 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.246165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.246241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.246259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.246269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.246278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.246295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 
00:29:17.702 [2024-07-25 10:44:21.256137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.256235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.256253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.256264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.256273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.256290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.266218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.266303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.266324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.266334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.266343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.266360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.276252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.276365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.276383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.276393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.276402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.276419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 
00:29:17.702 [2024-07-25 10:44:21.286296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.286379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.286398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.286408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.286416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.286434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.296247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.296328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.296346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.296356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.296365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.296382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.306324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.306491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.306510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.306519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.306528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.306550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 
00:29:17.702 [2024-07-25 10:44:21.316364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.316443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.316462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.316471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.316480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.316497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.326359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.326442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.326462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.326472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.326481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.326499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.336426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.336506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.336525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.336535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.336544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.336562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 
00:29:17.702 [2024-07-25 10:44:21.346435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.346515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.346534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.346544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.346552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.346569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.702 qpair failed and we were unable to recover it. 00:29:17.702 [2024-07-25 10:44:21.356482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.702 [2024-07-25 10:44:21.356563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.702 [2024-07-25 10:44:21.356584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.702 [2024-07-25 10:44:21.356594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.702 [2024-07-25 10:44:21.356603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.702 [2024-07-25 10:44:21.356620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.703 qpair failed and we were unable to recover it. 00:29:17.703 [2024-07-25 10:44:21.366471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.703 [2024-07-25 10:44:21.366552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.703 [2024-07-25 10:44:21.366571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.703 [2024-07-25 10:44:21.366581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.703 [2024-07-25 10:44:21.366590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.703 [2024-07-25 10:44:21.366607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.703 qpair failed and we were unable to recover it. 
00:29:17.703 [2024-07-25 10:44:21.376539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.703 [2024-07-25 10:44:21.376618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.703 [2024-07-25 10:44:21.376636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.703 [2024-07-25 10:44:21.376646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.703 [2024-07-25 10:44:21.376654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.703 [2024-07-25 10:44:21.376672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.703 qpair failed and we were unable to recover it. 00:29:17.703 [2024-07-25 10:44:21.386564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.703 [2024-07-25 10:44:21.386646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.703 [2024-07-25 10:44:21.386664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.703 [2024-07-25 10:44:21.386674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.703 [2024-07-25 10:44:21.386682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.703 [2024-07-25 10:44:21.386700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.703 qpair failed and we were unable to recover it. 00:29:17.703 [2024-07-25 10:44:21.396593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.703 [2024-07-25 10:44:21.396675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.703 [2024-07-25 10:44:21.396694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.703 [2024-07-25 10:44:21.396703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.703 [2024-07-25 10:44:21.396712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.703 [2024-07-25 10:44:21.396736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.703 qpair failed and we were unable to recover it. 
00:29:17.962 [2024-07-25 10:44:21.406601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-07-25 10:44:21.406689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-07-25 10:44:21.406707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-07-25 10:44:21.406723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-07-25 10:44:21.406732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.962 [2024-07-25 10:44:21.406749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-07-25 10:44:21.416650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-07-25 10:44:21.416735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-07-25 10:44:21.416753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-07-25 10:44:21.416763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-07-25 10:44:21.416771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.962 [2024-07-25 10:44:21.416789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-07-25 10:44:21.426602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-07-25 10:44:21.426735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-07-25 10:44:21.426753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-07-25 10:44:21.426763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-07-25 10:44:21.426772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.962 [2024-07-25 10:44:21.426789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.962 qpair failed and we were unable to recover it. 
00:29:17.963 [2024-07-25 10:44:21.436669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.436767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.436786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.436795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.436804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.436821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-07-25 10:44:21.446770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.446855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.446877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.446887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.446895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.446914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-07-25 10:44:21.456776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.456867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.456885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.456895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.456904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.456921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 
00:29:17.963 [2024-07-25 10:44:21.466781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.466862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.466881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.466891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.466900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.466917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-07-25 10:44:21.476843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.477011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.477030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.477039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.477048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.477066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-07-25 10:44:21.486789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.486919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.486938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.486947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.486960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.486980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 
00:29:17.963 [2024-07-25 10:44:21.496883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.496960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.496978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.496988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.496997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.497014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-07-25 10:44:21.506916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.507041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.507059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.507069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.507078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.507095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-07-25 10:44:21.516861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.516945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.516964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.516973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.516982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.517000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 
00:29:17.963 [2024-07-25 10:44:21.526951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.527077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-07-25 10:44:21.527095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-07-25 10:44:21.527105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-07-25 10:44:21.527114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.963 [2024-07-25 10:44:21.527131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-07-25 10:44:21.537026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-07-25 10:44:21.537157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.537176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.537187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.537196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.537214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 00:29:17.964 [2024-07-25 10:44:21.547003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.547081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.547099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.547109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.547118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.547135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 
00:29:17.964 [2024-07-25 10:44:21.556964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.557061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.557080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.557090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.557098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.557116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 00:29:17.964 [2024-07-25 10:44:21.567059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.567133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.567152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.567162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.567170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.567188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 00:29:17.964 [2024-07-25 10:44:21.577145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.577223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.577241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.577251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.577266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.577283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 
00:29:17.964 [2024-07-25 10:44:21.587118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.587285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.587303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.587313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.587322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.587339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 00:29:17.964 [2024-07-25 10:44:21.597122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.597208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.597227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.597237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.597246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.597264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 00:29:17.964 [2024-07-25 10:44:21.607194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.607275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.607292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.607301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.607310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.607328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 
00:29:17.964 [2024-07-25 10:44:21.617228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.617310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.617329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.617338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.964 [2024-07-25 10:44:21.617347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.964 [2024-07-25 10:44:21.617365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.964 qpair failed and we were unable to recover it. 00:29:17.964 [2024-07-25 10:44:21.627171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.964 [2024-07-25 10:44:21.627255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.964 [2024-07-25 10:44:21.627274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.964 [2024-07-25 10:44:21.627284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.965 [2024-07-25 10:44:21.627292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.965 [2024-07-25 10:44:21.627309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.965 qpair failed and we were unable to recover it. 00:29:17.965 [2024-07-25 10:44:21.637288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.965 [2024-07-25 10:44:21.637391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.965 [2024-07-25 10:44:21.637409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.965 [2024-07-25 10:44:21.637419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.965 [2024-07-25 10:44:21.637427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.965 [2024-07-25 10:44:21.637445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.965 qpair failed and we were unable to recover it. 
00:29:17.965 [2024-07-25 10:44:21.647232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.965 [2024-07-25 10:44:21.647308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.965 [2024-07-25 10:44:21.647326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.965 [2024-07-25 10:44:21.647336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.965 [2024-07-25 10:44:21.647344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.965 [2024-07-25 10:44:21.647361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.965 qpair failed and we were unable to recover it. 00:29:17.965 [2024-07-25 10:44:21.657369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.965 [2024-07-25 10:44:21.657451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.965 [2024-07-25 10:44:21.657469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.965 [2024-07-25 10:44:21.657479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.965 [2024-07-25 10:44:21.657487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:17.965 [2024-07-25 10:44:21.657505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.965 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.667343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.667424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.667442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.667453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.667465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.667482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-07-25 10:44:21.677383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.677464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.677482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.677492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.677501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.677518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.687423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.687507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.687525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.687535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.687544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.687562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.697463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.697545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.697564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.697574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.697583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.697600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-07-25 10:44:21.707443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.707523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.707541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.707550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.707559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.707576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.717507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.717625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.717643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.717653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.717662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.717679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.727450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.727529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.727548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.727558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.727566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.727583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-07-25 10:44:21.737533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.737702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.737724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.737734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.737743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.737760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.747553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.747634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.747653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.747664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.747672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.747690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.757613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.757701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.757724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.757738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.757747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.757780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-07-25 10:44:21.767610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-07-25 10:44:21.767690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-07-25 10:44:21.767709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-07-25 10:44:21.767724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-07-25 10:44:21.767733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.226 [2024-07-25 10:44:21.767750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-07-25 10:44:21.777633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.777710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.777732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.777742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.777751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.777768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.787667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.787746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.787765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.787775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.787784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.787801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 
00:29:18.227 [2024-07-25 10:44:21.797687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.797789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.797807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.797817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.797825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.797843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.807745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.807829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.807848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.807858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.807866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.807884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.817782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.817862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.817881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.817891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.817899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.817916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 
00:29:18.227 [2024-07-25 10:44:21.827803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.827884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.827902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.827912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.827920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.827938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.837847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.837931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.837949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.837958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.837967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.837984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.847875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.847959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.847977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.847990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.847998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.848015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 
00:29:18.227 [2024-07-25 10:44:21.857838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.857918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.857936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.857946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.857955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.857972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.867919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.868007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.868025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.868035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.868044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.868061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.877969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.878056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.878075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.878084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.878093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.878110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 
00:29:18.227 [2024-07-25 10:44:21.888043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.888149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.888167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.888177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.888186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.227 [2024-07-25 10:44:21.888202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-07-25 10:44:21.898026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-07-25 10:44:21.898104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-07-25 10:44:21.898122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-07-25 10:44:21.898132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-07-25 10:44:21.898141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.228 [2024-07-25 10:44:21.898158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.228 qpair failed and we were unable to recover it. 00:29:18.228 [2024-07-25 10:44:21.907998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.228 [2024-07-25 10:44:21.908075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.228 [2024-07-25 10:44:21.908093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.228 [2024-07-25 10:44:21.908102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.228 [2024-07-25 10:44:21.908111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.228 [2024-07-25 10:44:21.908128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.228 qpair failed and we were unable to recover it. 
00:29:18.228 [2024-07-25 10:44:21.918069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.228 [2024-07-25 10:44:21.918149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.228 [2024-07-25 10:44:21.918167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.228 [2024-07-25 10:44:21.918176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.228 [2024-07-25 10:44:21.918185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.228 [2024-07-25 10:44:21.918202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.228 qpair failed and we were unable to recover it. 00:29:18.528 [2024-07-25 10:44:21.928095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.928173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.928191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.928201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.928210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.928227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 00:29:18.528 [2024-07-25 10:44:21.938125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.938236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.938255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.938268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.938277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.938294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 
00:29:18.528 [2024-07-25 10:44:21.948193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.948302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.948320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.948330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.948338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.948355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 00:29:18.528 [2024-07-25 10:44:21.958180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.958267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.958286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.958295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.958304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.958321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 00:29:18.528 [2024-07-25 10:44:21.968244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.968325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.968343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.968352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.968361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.968378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 
00:29:18.528 [2024-07-25 10:44:21.978157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.978240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.978258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.978267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.978276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.978293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 00:29:18.528 [2024-07-25 10:44:21.988243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.988325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.988344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.988353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.988361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.988379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 00:29:18.528 [2024-07-25 10:44:21.998323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:21.998437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:21.998456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:21.998465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:21.998474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:21.998491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 
00:29:18.528 [2024-07-25 10:44:22.008335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:22.008417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.528 [2024-07-25 10:44:22.008435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.528 [2024-07-25 10:44:22.008445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.528 [2024-07-25 10:44:22.008453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.528 [2024-07-25 10:44:22.008470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.528 qpair failed and we were unable to recover it. 00:29:18.528 [2024-07-25 10:44:22.018311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.528 [2024-07-25 10:44:22.018415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.018434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.018444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.018452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.018469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.028316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.028395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.028416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.028425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.028434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.028451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 
00:29:18.529 [2024-07-25 10:44:22.038392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.038477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.038494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.038504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.038513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.038530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.048454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.048532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.048550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.048559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.048568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.048585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.058484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.058564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.058582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.058592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.058600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.058618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 
00:29:18.529 [2024-07-25 10:44:22.068514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.068603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.068620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.068630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.068639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.068659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.078536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.078618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.078636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.078645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.078653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.078671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.088541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.088621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.088639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.088648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.088656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.088674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 
00:29:18.529 [2024-07-25 10:44:22.098550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.098624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.098642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.098651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.098660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.098677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.108610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.108689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.108706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.108720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.108729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.108746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.118655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.118740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.118760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.118770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.118778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.118795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 
00:29:18.529 [2024-07-25 10:44:22.128607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.128697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.128718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.529 [2024-07-25 10:44:22.128728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.529 [2024-07-25 10:44:22.128736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.529 [2024-07-25 10:44:22.128753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.529 qpair failed and we were unable to recover it. 00:29:18.529 [2024-07-25 10:44:22.138701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.529 [2024-07-25 10:44:22.138784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.529 [2024-07-25 10:44:22.138802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.138811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.138820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.138837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 00:29:18.530 [2024-07-25 10:44:22.148745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.148827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.148845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.148854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.148862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.148880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 
00:29:18.530 [2024-07-25 10:44:22.158736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.158821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.158839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.158848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.158857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.158877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 00:29:18.530 [2024-07-25 10:44:22.168711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.168792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.168810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.168820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.168828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.168846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 00:29:18.530 [2024-07-25 10:44:22.178865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.178943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.178960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.178969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.178978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.178995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 
00:29:18.530 [2024-07-25 10:44:22.188851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.188928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.188945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.188955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.188964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.188981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 00:29:18.530 [2024-07-25 10:44:22.198867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.198944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.198962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.198971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.198980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.198997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 00:29:18.530 [2024-07-25 10:44:22.208912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.208993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.209013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.209023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.209031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.209049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 
00:29:18.530 [2024-07-25 10:44:22.218942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.219020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.219038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.219047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.219055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.219072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 00:29:18.530 [2024-07-25 10:44:22.228909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.530 [2024-07-25 10:44:22.228996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.530 [2024-07-25 10:44:22.229014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.530 [2024-07-25 10:44:22.229023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.530 [2024-07-25 10:44:22.229031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.530 [2024-07-25 10:44:22.229048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.530 qpair failed and we were unable to recover it. 00:29:18.790 [2024-07-25 10:44:22.238993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.790 [2024-07-25 10:44:22.239075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.790 [2024-07-25 10:44:22.239092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.790 [2024-07-25 10:44:22.239102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.790 [2024-07-25 10:44:22.239110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.790 [2024-07-25 10:44:22.239128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.790 qpair failed and we were unable to recover it. 
00:29:18.790 [2024-07-25 10:44:22.249022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.790 [2024-07-25 10:44:22.249189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.790 [2024-07-25 10:44:22.249207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.790 [2024-07-25 10:44:22.249217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.790 [2024-07-25 10:44:22.249225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.790 [2024-07-25 10:44:22.249245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.790 qpair failed and we were unable to recover it. 00:29:18.790 [2024-07-25 10:44:22.259062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.790 [2024-07-25 10:44:22.259170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.790 [2024-07-25 10:44:22.259189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.790 [2024-07-25 10:44:22.259198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.790 [2024-07-25 10:44:22.259207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.790 [2024-07-25 10:44:22.259224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.790 qpair failed and we were unable to recover it. 00:29:18.790 [2024-07-25 10:44:22.269111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.790 [2024-07-25 10:44:22.269199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.790 [2024-07-25 10:44:22.269217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.790 [2024-07-25 10:44:22.269226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.790 [2024-07-25 10:44:22.269235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.269252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 
00:29:18.791 [2024-07-25 10:44:22.279113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.279195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.279212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.279221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.279230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.279247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.289130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.289214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.289232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.289241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.289250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.289266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.299171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.299251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.299275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.299284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.299292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.299310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 
00:29:18.791 [2024-07-25 10:44:22.309221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.309299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.309317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.309326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.309334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.309351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.319221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.319307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.319324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.319334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.319342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.319359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.329245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.329377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.329397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.329407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.329416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.329434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 
00:29:18.791 [2024-07-25 10:44:22.339290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.339367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.339385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.339394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.339406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.339423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.349321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.349452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.349470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.349480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.349489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.349505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.359345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.359426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.359443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.359453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.359461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.359478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 
00:29:18.791 [2024-07-25 10:44:22.369409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.369518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.369537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.369546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.369555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.369572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.379426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.379553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.379571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.379581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.379589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.379607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 00:29:18.791 [2024-07-25 10:44:22.389439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.791 [2024-07-25 10:44:22.389550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.791 [2024-07-25 10:44:22.389568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.791 [2024-07-25 10:44:22.389578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.791 [2024-07-25 10:44:22.389587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.791 [2024-07-25 10:44:22.389604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.791 qpair failed and we were unable to recover it. 
00:29:18.792 [2024-07-25 10:44:22.399421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.399505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.399522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.399532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.399541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.399558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 00:29:18.792 [2024-07-25 10:44:22.409436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.409520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.409538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.409547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.409556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.409573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 00:29:18.792 [2024-07-25 10:44:22.419534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.419610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.419628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.419637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.419646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.419663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 
00:29:18.792 [2024-07-25 10:44:22.429529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.429611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.429629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.429638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.429650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.429668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 00:29:18.792 [2024-07-25 10:44:22.439553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.439633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.439651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.439661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.439670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.439687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 00:29:18.792 [2024-07-25 10:44:22.449512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.449591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.449608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.449618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.449626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.449643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 
00:29:18.792 [2024-07-25 10:44:22.459570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.459653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.459670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.459680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.459689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.459706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 00:29:18.792 [2024-07-25 10:44:22.469614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.469711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.469732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.469742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.469751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.469768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 00:29:18.792 [2024-07-25 10:44:22.479665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.479753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.479771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.479780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.479789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.479807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 
00:29:18.792 [2024-07-25 10:44:22.489677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.792 [2024-07-25 10:44:22.489761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.792 [2024-07-25 10:44:22.489779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.792 [2024-07-25 10:44:22.489789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.792 [2024-07-25 10:44:22.489797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:18.792 [2024-07-25 10:44:22.489815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.792 qpair failed and we were unable to recover it. 00:29:19.053 [2024-07-25 10:44:22.499698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.499792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.499810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.499820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.499828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.053 [2024-07-25 10:44:22.499846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.053 qpair failed and we were unable to recover it. 00:29:19.053 [2024-07-25 10:44:22.509765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.509867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.509888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.509898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.509906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.053 [2024-07-25 10:44:22.509924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.053 qpair failed and we were unable to recover it. 
00:29:19.053 [2024-07-25 10:44:22.519744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.519823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.519840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.519853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.519861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.053 [2024-07-25 10:44:22.519879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.053 qpair failed and we were unable to recover it. 00:29:19.053 [2024-07-25 10:44:22.529822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.529902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.529920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.529929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.529937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.053 [2024-07-25 10:44:22.529954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.053 qpair failed and we were unable to recover it. 00:29:19.053 [2024-07-25 10:44:22.539860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.539940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.539958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.539967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.539976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.053 [2024-07-25 10:44:22.539993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.053 qpair failed and we were unable to recover it. 
00:29:19.053 [2024-07-25 10:44:22.549875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.549954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.549971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.549981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.549989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.053 [2024-07-25 10:44:22.550006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.053 qpair failed and we were unable to recover it. 00:29:19.053 [2024-07-25 10:44:22.559870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.559964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.559982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.559991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.560000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.053 [2024-07-25 10:44:22.560017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.053 qpair failed and we were unable to recover it. 00:29:19.053 [2024-07-25 10:44:22.569927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.053 [2024-07-25 10:44:22.570003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.053 [2024-07-25 10:44:22.570021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.053 [2024-07-25 10:44:22.570030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.053 [2024-07-25 10:44:22.570039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.570056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 
00:29:19.054 [2024-07-25 10:44:22.579956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.580038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.580056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.580065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.580074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.580091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.589983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.590064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.590081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.590091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.590099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.590116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.600005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.600085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.600103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.600112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.600121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.600138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 
00:29:19.054 [2024-07-25 10:44:22.610050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.610126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.610143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.610156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.610164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.610181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.620058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.620157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.620178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.620188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.620197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.620213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.630072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.630153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.630171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.630180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.630188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.630205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 
00:29:19.054 [2024-07-25 10:44:22.640106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.640187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.640205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.640214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.640222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.640239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.650146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.650263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.650281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.650291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.650299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.650316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.660197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.660278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.660296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.660305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.660314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.660331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 
00:29:19.054 [2024-07-25 10:44:22.670199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.670277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.670295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.670304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.670313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.670330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.680246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.680362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.680380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.680390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.054 [2024-07-25 10:44:22.680398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.054 [2024-07-25 10:44:22.680415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.054 qpair failed and we were unable to recover it. 00:29:19.054 [2024-07-25 10:44:22.690248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.054 [2024-07-25 10:44:22.690333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.054 [2024-07-25 10:44:22.690350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.054 [2024-07-25 10:44:22.690360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.055 [2024-07-25 10:44:22.690368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.055 [2024-07-25 10:44:22.690385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.055 qpair failed and we were unable to recover it. 
00:29:19.055 [2024-07-25 10:44:22.700293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.055 [2024-07-25 10:44:22.700374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.055 [2024-07-25 10:44:22.700391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.055 [2024-07-25 10:44:22.700404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.055 [2024-07-25 10:44:22.700412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.055 [2024-07-25 10:44:22.700429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.055 qpair failed and we were unable to recover it. 00:29:19.055 [2024-07-25 10:44:22.710286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.055 [2024-07-25 10:44:22.710391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.055 [2024-07-25 10:44:22.710408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.055 [2024-07-25 10:44:22.710418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.055 [2024-07-25 10:44:22.710426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.055 [2024-07-25 10:44:22.710443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.055 qpair failed and we were unable to recover it. 00:29:19.055 [2024-07-25 10:44:22.720309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.055 [2024-07-25 10:44:22.720477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.055 [2024-07-25 10:44:22.720496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.055 [2024-07-25 10:44:22.720506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.055 [2024-07-25 10:44:22.720514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.055 [2024-07-25 10:44:22.720532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.055 qpair failed and we were unable to recover it. 
00:29:19.055 [2024-07-25 10:44:22.730332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.055 [2024-07-25 10:44:22.730411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.055 [2024-07-25 10:44:22.730428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.055 [2024-07-25 10:44:22.730438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.055 [2024-07-25 10:44:22.730446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.055 [2024-07-25 10:44:22.730464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.055 qpair failed and we were unable to recover it. 00:29:19.055 [2024-07-25 10:44:22.740422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.055 [2024-07-25 10:44:22.740505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.055 [2024-07-25 10:44:22.740522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.055 [2024-07-25 10:44:22.740532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.055 [2024-07-25 10:44:22.740540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.055 [2024-07-25 10:44:22.740557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.055 qpair failed and we were unable to recover it. 00:29:19.055 [2024-07-25 10:44:22.750358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.055 [2024-07-25 10:44:22.750436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.055 [2024-07-25 10:44:22.750453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.055 [2024-07-25 10:44:22.750463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.055 [2024-07-25 10:44:22.750471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.055 [2024-07-25 10:44:22.750488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.055 qpair failed and we were unable to recover it. 
00:29:19.315 [2024-07-25 10:44:22.760430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.315 [2024-07-25 10:44:22.760512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.315 [2024-07-25 10:44:22.760530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.315 [2024-07-25 10:44:22.760540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.315 [2024-07-25 10:44:22.760548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.315 [2024-07-25 10:44:22.760565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.315 qpair failed and we were unable to recover it. 00:29:19.315 [2024-07-25 10:44:22.770478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.315 [2024-07-25 10:44:22.770559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.315 [2024-07-25 10:44:22.770577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.315 [2024-07-25 10:44:22.770586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.315 [2024-07-25 10:44:22.770595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.315 [2024-07-25 10:44:22.770612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.315 qpair failed and we were unable to recover it. 00:29:19.315 [2024-07-25 10:44:22.780474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.315 [2024-07-25 10:44:22.780555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.315 [2024-07-25 10:44:22.780573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.315 [2024-07-25 10:44:22.780583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.315 [2024-07-25 10:44:22.780591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.315 [2024-07-25 10:44:22.780609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.315 qpair failed and we were unable to recover it. 
00:29:19.315 [2024-07-25 10:44:22.790522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.315 [2024-07-25 10:44:22.790602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.790624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.790633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.790642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.790659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.800557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.800638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.800656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.800666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.800674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.800691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.810606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.810718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.810737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.810747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.810755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.810772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 
00:29:19.316 [2024-07-25 10:44:22.820584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.820772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.820791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.820801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.820809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.820827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.830590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.830670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.830687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.830697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.830705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.830731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.840671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.840751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.840769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.840778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.840787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.840804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 
00:29:19.316 [2024-07-25 10:44:22.850685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.850853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.850871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.850881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.850890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.850907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.860736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.860896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.860915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.860925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.860933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.860950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.870803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.870891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.870909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.870919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.870928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.870945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 
00:29:19.316 [2024-07-25 10:44:22.880782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.880870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.880890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.880900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.880908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.880925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.890825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.890997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.891015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.891025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.891034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.891052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 00:29:19.316 [2024-07-25 10:44:22.900880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.900960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.316 [2024-07-25 10:44:22.900977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.316 [2024-07-25 10:44:22.900987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.316 [2024-07-25 10:44:22.900995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.316 [2024-07-25 10:44:22.901012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.316 qpair failed and we were unable to recover it. 
00:29:19.316 [2024-07-25 10:44:22.910875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.316 [2024-07-25 10:44:22.910959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.910976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.910986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.910994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.911012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.317 [2024-07-25 10:44:22.920956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.921038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.921056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.921065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.921074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.921094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.317 [2024-07-25 10:44:22.930956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.931037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.931055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.931065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.931073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.931091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 
00:29:19.317 [2024-07-25 10:44:22.940954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.941083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.941101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.941111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.941120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.941137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.317 [2024-07-25 10:44:22.950967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.951054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.951071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.951080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.951089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.951107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.317 [2024-07-25 10:44:22.961081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.961163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.961181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.961190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.961199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.961216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 
00:29:19.317 [2024-07-25 10:44:22.971049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.971128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.971149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.971159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.971167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.971185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.317 [2024-07-25 10:44:22.981071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.981169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.981187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.981196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.981205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.981222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.317 [2024-07-25 10:44:22.991122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:22.991206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:22.991224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:22.991234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:22.991244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:22.991261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 
00:29:19.317 [2024-07-25 10:44:23.001079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:23.001165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:23.001183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:23.001192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:23.001201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:23.001218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.317 [2024-07-25 10:44:23.011188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.317 [2024-07-25 10:44:23.011267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.317 [2024-07-25 10:44:23.011285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.317 [2024-07-25 10:44:23.011295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.317 [2024-07-25 10:44:23.011303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.317 [2024-07-25 10:44:23.011323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.317 qpair failed and we were unable to recover it. 00:29:19.578 [2024-07-25 10:44:23.021175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.578 [2024-07-25 10:44:23.021255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.578 [2024-07-25 10:44:23.021273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.578 [2024-07-25 10:44:23.021282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.578 [2024-07-25 10:44:23.021291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.578 [2024-07-25 10:44:23.021308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.578 qpair failed and we were unable to recover it. 
00:29:19.578 [2024-07-25 10:44:23.031157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.578 [2024-07-25 10:44:23.031241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.578 [2024-07-25 10:44:23.031259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.578 [2024-07-25 10:44:23.031268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.578 [2024-07-25 10:44:23.031276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.578 [2024-07-25 10:44:23.031293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.578 qpair failed and we were unable to recover it. 00:29:19.578 [2024-07-25 10:44:23.041224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.578 [2024-07-25 10:44:23.041309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.578 [2024-07-25 10:44:23.041326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.578 [2024-07-25 10:44:23.041336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.578 [2024-07-25 10:44:23.041345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.578 [2024-07-25 10:44:23.041363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.578 qpair failed and we were unable to recover it. 00:29:19.578 [2024-07-25 10:44:23.051248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.578 [2024-07-25 10:44:23.051332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.578 [2024-07-25 10:44:23.051350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.578 [2024-07-25 10:44:23.051359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.578 [2024-07-25 10:44:23.051368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.578 [2024-07-25 10:44:23.051385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.578 qpair failed and we were unable to recover it. 
00:29:19.578 [2024-07-25 10:44:23.061235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.578 [2024-07-25 10:44:23.061318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.578 [2024-07-25 10:44:23.061342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.578 [2024-07-25 10:44:23.061352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.578 [2024-07-25 10:44:23.061360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.578 [2024-07-25 10:44:23.061378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.578 qpair failed and we were unable to recover it. 00:29:19.578 [2024-07-25 10:44:23.071340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.578 [2024-07-25 10:44:23.071471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.578 [2024-07-25 10:44:23.071490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.578 [2024-07-25 10:44:23.071500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.578 [2024-07-25 10:44:23.071509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.578 [2024-07-25 10:44:23.071526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.578 qpair failed and we were unable to recover it. 00:29:19.578 [2024-07-25 10:44:23.081322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.578 [2024-07-25 10:44:23.081407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.578 [2024-07-25 10:44:23.081424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.081434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.081442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.081459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 
00:29:19.579 [2024-07-25 10:44:23.091382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.091472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.091490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.091500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.091509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.091526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.101332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.101410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.101428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.101437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.101449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.101467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.111442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.111520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.111539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.111549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.111557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.111575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 
00:29:19.579 [2024-07-25 10:44:23.121494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.121574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.121593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.121602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.121611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.121628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.131524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.131604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.131622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.131631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.131640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.131657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.141547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.141627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.141645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.141655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.141664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.141681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 
00:29:19.579 [2024-07-25 10:44:23.151508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.151592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.151610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.151619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.151627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.151645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.161582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.161662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.161680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.161689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.161697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.161729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.171609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.171688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.171706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.171720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.171729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.171746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 
00:29:19.579 [2024-07-25 10:44:23.181635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.181710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.181732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.181742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.181751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.181768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.191673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.191846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.191865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.579 [2024-07-25 10:44:23.191875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.579 [2024-07-25 10:44:23.191886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.579 [2024-07-25 10:44:23.191904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.579 qpair failed and we were unable to recover it. 00:29:19.579 [2024-07-25 10:44:23.201682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.579 [2024-07-25 10:44:23.201770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.579 [2024-07-25 10:44:23.201789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.201799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.201807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.201825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 
00:29:19.580 [2024-07-25 10:44:23.211739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.580 [2024-07-25 10:44:23.211820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.580 [2024-07-25 10:44:23.211838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.211848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.211856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.211874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 00:29:19.580 [2024-07-25 10:44:23.221741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.580 [2024-07-25 10:44:23.221831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.580 [2024-07-25 10:44:23.221849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.221859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.221868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.221885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 00:29:19.580 [2024-07-25 10:44:23.231785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.580 [2024-07-25 10:44:23.231863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.580 [2024-07-25 10:44:23.231881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.231890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.231899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.231916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 
00:29:19.580 [2024-07-25 10:44:23.241762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.580 [2024-07-25 10:44:23.241844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.580 [2024-07-25 10:44:23.241862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.241871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.241880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.241897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 00:29:19.580 [2024-07-25 10:44:23.251921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.580 [2024-07-25 10:44:23.252001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.580 [2024-07-25 10:44:23.252019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.252028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.252037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.252055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 00:29:19.580 [2024-07-25 10:44:23.261817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.580 [2024-07-25 10:44:23.261895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.580 [2024-07-25 10:44:23.261913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.261923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.261931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.261948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 
00:29:19.580 [2024-07-25 10:44:23.271907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.580 [2024-07-25 10:44:23.272010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.580 [2024-07-25 10:44:23.272027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.580 [2024-07-25 10:44:23.272037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.580 [2024-07-25 10:44:23.272045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.580 [2024-07-25 10:44:23.272062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.580 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.281858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.281945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.281963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.281975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.281984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.282001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.291961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.292043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.292061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.292070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.292079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.292096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 
00:29:19.842 [2024-07-25 10:44:23.301932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.302020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.302038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.302047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.302056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.302073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.311973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.312102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.312121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.312130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.312139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.312155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.322020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.322105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.322123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.322132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.322140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.322158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 
00:29:19.842 [2024-07-25 10:44:23.332020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.332150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.332169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.332179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.332187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.332205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.342136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.342211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.342230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.342240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.342249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.342266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.352171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.352253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.352270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.352280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.352288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.352306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 
00:29:19.842 [2024-07-25 10:44:23.362168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.362253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.362271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.362281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.362290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.362307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.372202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.372367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.372386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.372399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.372407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.372424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.382158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.382239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.382258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.382270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.382280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.382298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 
00:29:19.842 [2024-07-25 10:44:23.392190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.392352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.392371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.392381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.392389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.392407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.402253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.402334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.402351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.402361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.402369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.402386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.412293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.412369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.412387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.412396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.412405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.412423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 
00:29:19.842 [2024-07-25 10:44:23.422313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.422393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.422411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.422421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.422429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.422446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.842 [2024-07-25 10:44:23.432381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.842 [2024-07-25 10:44:23.432465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.842 [2024-07-25 10:44:23.432483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.842 [2024-07-25 10:44:23.432493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.842 [2024-07-25 10:44:23.432501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.842 [2024-07-25 10:44:23.432518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.842 qpair failed and we were unable to recover it. 00:29:19.843 [2024-07-25 10:44:23.442394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.442476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.442493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.442503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.442511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.442528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 
00:29:19.843 [2024-07-25 10:44:23.452439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.452515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.452533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.452542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.452550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.452567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 00:29:19.843 [2024-07-25 10:44:23.462461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.462539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.462557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.462569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.462577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.462595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 00:29:19.843 [2024-07-25 10:44:23.472496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.472580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.472598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.472607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.472616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.472634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 
00:29:19.843 [2024-07-25 10:44:23.482520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.482607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.482624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.482634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.482643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.482661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 00:29:19.843 [2024-07-25 10:44:23.492599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.492683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.492701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.492711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.492725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.492742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 00:29:19.843 [2024-07-25 10:44:23.502589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.502668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.502685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.502695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.502704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.502724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 
00:29:19.843 [2024-07-25 10:44:23.512605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.512684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.512701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.512710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.512722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.512739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 00:29:19.843 [2024-07-25 10:44:23.522651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.522764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.522783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.522793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.522801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.522818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 00:29:19.843 [2024-07-25 10:44:23.532663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.843 [2024-07-25 10:44:23.532743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.843 [2024-07-25 10:44:23.532761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.843 [2024-07-25 10:44:23.532770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.843 [2024-07-25 10:44:23.532779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:19.843 [2024-07-25 10:44:23.532796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.843 qpair failed and we were unable to recover it. 
00:29:20.103 [2024-07-25 10:44:23.542686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.542775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.542792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.542802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.542811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.542828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-07-25 10:44:23.552722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.552801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.552821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.552831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.552839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.552856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-07-25 10:44:23.562752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.562834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.562852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.562861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.562870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.562887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 
00:29:20.103 [2024-07-25 10:44:23.572776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.572858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.572876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.572886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.572894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.572912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-07-25 10:44:23.582848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.582959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.582977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.582987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.582996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.583013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-07-25 10:44:23.592850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.592931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.592948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.592958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.592966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.592983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 
00:29:20.103 [2024-07-25 10:44:23.602813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.602895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.602912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.602921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.602929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.602946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-07-25 10:44:23.612906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.612990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.613008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.613017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.613025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.613042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-07-25 10:44:23.622928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.623009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.623027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.623036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.623045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.623063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 
00:29:20.103 [2024-07-25 10:44:23.632945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.633040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.633058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.103 [2024-07-25 10:44:23.633068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.103 [2024-07-25 10:44:23.633077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.103 [2024-07-25 10:44:23.633094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-07-25 10:44:23.642985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.103 [2024-07-25 10:44:23.643066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.103 [2024-07-25 10:44:23.643086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.643097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.643105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.643123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.652954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.653034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.653052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.653061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.653070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.653087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-07-25 10:44:23.663032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.663115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.663134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.663144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.663153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.663170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.673075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.673154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.673172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.673181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.673190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.673207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.683094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.683178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.683195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.683204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.683213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.683233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-07-25 10:44:23.693113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.693190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.693207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.693217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.693225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.693242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.703143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.703216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.703234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.703243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.703252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.703269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.713166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.713244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.713261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.713271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.713279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.713296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-07-25 10:44:23.723199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.723283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.723301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.723311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.723319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.723336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.733221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.733300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.733321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.733330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.733339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.733356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.743248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.743329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.743347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.743356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.743364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.743381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-07-25 10:44:23.753277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.753355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.104 [2024-07-25 10:44:23.753372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.104 [2024-07-25 10:44:23.753382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.104 [2024-07-25 10:44:23.753390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.104 [2024-07-25 10:44:23.753407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-07-25 10:44:23.763297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.104 [2024-07-25 10:44:23.763423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.105 [2024-07-25 10:44:23.763442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.105 [2024-07-25 10:44:23.763452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.105 [2024-07-25 10:44:23.763460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.105 [2024-07-25 10:44:23.763478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-07-25 10:44:23.773328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.105 [2024-07-25 10:44:23.773406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.105 [2024-07-25 10:44:23.773424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.105 [2024-07-25 10:44:23.773433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.105 [2024-07-25 10:44:23.773442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.105 [2024-07-25 10:44:23.773461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.105 qpair failed and we were unable to recover it. 
00:29:20.105 [2024-07-25 10:44:23.783304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.105 [2024-07-25 10:44:23.783383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.105 [2024-07-25 10:44:23.783401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.105 [2024-07-25 10:44:23.783410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.105 [2024-07-25 10:44:23.783419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.105 [2024-07-25 10:44:23.783436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-07-25 10:44:23.793345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.105 [2024-07-25 10:44:23.793426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.105 [2024-07-25 10:44:23.793444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.105 [2024-07-25 10:44:23.793453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.105 [2024-07-25 10:44:23.793462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.105 [2024-07-25 10:44:23.793478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-07-25 10:44:23.803400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.105 [2024-07-25 10:44:23.803481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.105 [2024-07-25 10:44:23.803498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.105 [2024-07-25 10:44:23.803507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.105 [2024-07-25 10:44:23.803516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.105 [2024-07-25 10:44:23.803533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.105 qpair failed and we were unable to recover it. 
00:29:20.364 [2024-07-25 10:44:23.813400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.813502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.813520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.813530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.813539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.813555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.823511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.823616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.823641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.823651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.823660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.823677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.833499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.833580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.833598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.833607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.833616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.833633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 
00:29:20.365 [2024-07-25 10:44:23.843523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.843605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.843623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.843632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.843641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.843658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.853559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.853684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.853702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.853711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.853724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.853741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.863612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.863788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.863807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.863816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.863828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.863845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 
00:29:20.365 [2024-07-25 10:44:23.873628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.873711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.873733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.873743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.873751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.873768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.883619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.883708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.883730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.883739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.883748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.883765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.893674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.893754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.893771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.893780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.893788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.893806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 
00:29:20.365 [2024-07-25 10:44:23.903713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.903802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.903821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.903831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.903839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.903857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.913738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.913825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.913843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.913852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.913861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.913878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 00:29:20.365 [2024-07-25 10:44:23.923739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.365 [2024-07-25 10:44:23.923822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.365 [2024-07-25 10:44:23.923840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.365 [2024-07-25 10:44:23.923850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.365 [2024-07-25 10:44:23.923858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.365 [2024-07-25 10:44:23.923875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.365 qpair failed and we were unable to recover it. 
00:29:20.365 [2024-07-25 10:44:23.933808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:23.933923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:23.933941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:23.933951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:23.933959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:23.933977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 00:29:20.366 [2024-07-25 10:44:23.943806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:23.943887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:23.943905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:23.943914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:23.943923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:23.943940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 00:29:20.366 [2024-07-25 10:44:23.953845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:23.953927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:23.953945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:23.953954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:23.953966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:23.953984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 
00:29:20.366 [2024-07-25 10:44:23.963872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:23.963952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:23.963970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:23.963979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:23.963988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:23.964005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 00:29:20.366 [2024-07-25 10:44:23.973903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:23.973978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:23.973995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:23.974005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:23.974013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:23.974031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 00:29:20.366 [2024-07-25 10:44:23.983961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:23.984162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:23.984181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:23.984190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:23.984199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:23.984217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 
00:29:20.366 [2024-07-25 10:44:23.993963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:23.994044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:23.994061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:23.994070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:23.994078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:23.994096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 00:29:20.366 [2024-07-25 10:44:24.004010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:24.004094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:24.004112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:24.004121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:24.004129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:24.004147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 00:29:20.366 [2024-07-25 10:44:24.014024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:24.014118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:24.014136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:24.014145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:24.014153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:24.014170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 
00:29:20.366 [2024-07-25 10:44:24.024025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.366 [2024-07-25 10:44:24.024131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.366 [2024-07-25 10:44:24.024152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.366 [2024-07-25 10:44:24.024163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.366 [2024-07-25 10:44:24.024173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdd1a0 00:29:20.366 [2024-07-25 10:44:24.024190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.366 qpair failed and we were unable to recover it. 00:29:20.366 [2024-07-25 10:44:24.024334] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:20.366 A controller has encountered a failure and is being reset. 00:29:20.366 [2024-07-25 10:44:24.024438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1beb210 (9): Bad file descriptor 00:29:20.625 Controller properly reset. 00:29:20.625 Initializing NVMe Controllers 00:29:20.625 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:20.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:20.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:20.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:20.625 Initialization complete. Launching workers. 
00:29:20.625 Starting thread on core 1 00:29:20.625 Starting thread on core 2 00:29:20.625 Starting thread on core 3 00:29:20.625 Starting thread on core 0 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:20.625 00:29:20.625 real 0m11.548s 00:29:20.625 user 0m20.454s 00:29:20.625 sys 0m4.949s 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.625 ************************************ 00:29:20.625 END TEST nvmf_target_disconnect_tc2 00:29:20.625 ************************************ 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.625 rmmod nvme_tcp 00:29:20.625 rmmod nvme_fabrics 00:29:20.625 rmmod nvme_keyring 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 4056133 ']' 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 4056133 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 4056133 ']' 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 4056133 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:20.625 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4056133 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4056133' 00:29:20.884 killing process with pid 4056133 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 4056133 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 4056133 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.884 10:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.419 10:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:23.419 00:29:23.419 real 0m20.543s 00:29:23.419 user 0m48.606s 00:29:23.419 sys 0m10.250s 00:29:23.419 10:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.419 10:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:23.419 ************************************ 00:29:23.419 END TEST nvmf_target_disconnect 00:29:23.419 ************************************ 00:29:23.419 10:44:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:23.419 00:29:23.419 real 6m12.051s 00:29:23.419 user 10m53.097s 00:29:23.419 sys 2m16.414s 00:29:23.419 10:44:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.419 10:44:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.419 ************************************ 00:29:23.419 END TEST nvmf_host 00:29:23.419 ************************************ 00:29:23.419 00:29:23.419 real 22m18.059s 00:29:23.419 user 45m23.441s 00:29:23.419 sys 8m14.090s 00:29:23.419 10:44:26 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.419 10:44:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.419 ************************************ 00:29:23.419 END TEST nvmf_tcp 00:29:23.419 ************************************ 00:29:23.419 10:44:26 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:29:23.419 10:44:26 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:23.419 10:44:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.419 10:44:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.419 10:44:26 -- common/autotest_common.sh@10 -- # set +x 00:29:23.419 ************************************ 00:29:23.419 START TEST spdkcli_nvmf_tcp 00:29:23.419 ************************************ 00:29:23.419 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:23.419 * Looking for test storage... 
00:29:23.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4057858 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4057858 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 4057858 ']' 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:23.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.420 10:44:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:23.420 [2024-07-25 10:44:26.968199] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:29:23.420 [2024-07-25 10:44:26.968254] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057858 ] 00:29:23.420 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.420 [2024-07-25 10:44:27.036976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:23.420 [2024-07-25 10:44:27.111571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.420 [2024-07-25 10:44:27.111574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.355 10:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.356 10:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:24.356 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:24.356 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:24.356 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:24.356 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:24.356 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:24.356 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:24.356 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:24.356 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:24.356 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:24.356 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:24.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:24.356 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:24.356 ' 00:29:26.888 [2024-07-25 10:44:30.188975] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.824 [2024-07-25 10:44:31.364860] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:30.362 [2024-07-25 10:44:33.527235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:31.737 [2024-07-25 10:44:35.384903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:33.113 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:33.113 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:33.113 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:33.113 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:33.113 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:33.113 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:33.113 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:33.113 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:33.113 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:33.113 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:33.113 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:33.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:33.113 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:33.372 10:44:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:33.630 10:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.890 10:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:33.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:33.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:33.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:33.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:33.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:33.890 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:33.890 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:33.890 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:33.890 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:33.890 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:33.890 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:33.890 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:33.890 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:33.890 ' 00:29:39.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:39.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:39.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:39.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:39.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:39.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:39.161 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:39.161 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:39.161 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:39.161 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:29:39.161 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:39.161 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:39.161 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:39.161 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:39.161 10:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:39.161 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:39.161 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.161 10:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4057858 00:29:39.161 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 4057858 ']' 00:29:39.162 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 4057858 00:29:39.162 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:29:39.162 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:39.162 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4057858 00:29:39.421 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:39.421 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:39.421 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4057858' 00:29:39.421 killing process with pid 4057858 00:29:39.421 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 4057858 00:29:39.421 10:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 4057858 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4057858 ']' 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4057858 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 4057858 ']' 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 4057858 00:29:39.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4057858) - No such process 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 4057858 is not found' 00:29:39.421 Process with pid 4057858 is not found 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:39.421 00:29:39.421 real 0m16.255s 00:29:39.421 user 0m34.109s 00:29:39.421 sys 0m0.936s 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.421 10:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.421 ************************************ 00:29:39.421 END TEST spdkcli_nvmf_tcp 00:29:39.421 ************************************ 00:29:39.421 10:44:43 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:39.421 10:44:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:39.421 10:44:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.421 10:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:39.680 ************************************ 00:29:39.680 START TEST nvmf_identify_passthru 00:29:39.680 ************************************ 00:29:39.680 10:44:43 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:39.680 * Looking for test storage... 00:29:39.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.680 10:44:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.680 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.681 10:44:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.681 10:44:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.681 10:44:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.681 10:44:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.681 10:44:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.681 10:44:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.681 10:44:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:39.681 10:44:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.681 10:44:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.681 10:44:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:39.681 10:44:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.681 10:44:43 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.681 10:44:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.282 10:44:49 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:46.282 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:46.282 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:46.282 Found net devices under 0000:af:00.0: cvl_0_0 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:46.282 Found net devices under 0000:af:00.1: cvl_0_1 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
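The discovery pass above maps each supported e810 PCI function to its Linux net device by globbing sysfs, which is how the names cvl_0_0 and cvl_0_1 are obtained for the TCP setup that follows. A minimal stand-alone sketch of that lookup, reusing the 0000:af:00.0 address reported in this trace (illustrative only, not an extra command run by the harness):

  pci=0000:af:00.0                      # PCI address echoed by the discovery loop above
  ls "/sys/bus/pci/devices/$pci/net/"   # prints the attached net device name, e.g. cvl_0_0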
00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.282 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:46.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:29:46.283 00:29:46.283 --- 10.0.0.2 ping statistics --- 00:29:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.283 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:29:46.283 00:29:46.283 --- 10.0.0.1 ping statistics --- 00:29:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.283 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.283 10:44:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.283 10:44:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.283 10:44:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:29:46.283 10:44:49 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:d8:00.0 00:29:46.283 10:44:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:29:46.283 10:44:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:29:46.283 10:44:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:46.283 10:44:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:46.283 10:44:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:46.283 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.555 
10:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:29:51.555 10:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:51.555 10:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:51.555 10:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:51.555 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.827 10:44:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:56.827 10:44:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.827 10:44:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.827 10:44:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4065295 00:29:56.827 10:44:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:56.827 10:44:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.827 10:44:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4065295 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 4065295 ']' 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:56.827 10:44:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.827 [2024-07-25 10:44:59.553671] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:29:56.827 [2024-07-25 10:44:59.553730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.827 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.827 [2024-07-25 10:44:59.626329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.827 [2024-07-25 10:44:59.695929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.827 [2024-07-25 10:44:59.695971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
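Before any NVMe-oF subsystem exists, the test records the local drive's serial and model number straight over PCIe so it can later verify that the passthru controller exposed over TCP reports the same values. A condensed sketch of that baseline step, with the repository path shortened and the BDF taken from the gen_nvme.sh output above (a restatement of the trace, not a new procedure):

  bdf=0000:d8:00.0                                   # first NVMe BDF found above
  ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}'     # -> BTLN916500W71P6AGN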
00:29:56.827 [2024-07-25 10:44:59.695980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.827 [2024-07-25 10:44:59.695988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.827 [2024-07-25 10:44:59.695995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.827 [2024-07-25 10:44:59.696073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.827 [2024-07-25 10:44:59.696170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.827 [2024-07-25 10:44:59.696255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.827 [2024-07-25 10:44:59.696256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:29:56.827 10:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.827 INFO: Log level set to 20 00:29:56.827 INFO: Requests: 00:29:56.827 { 00:29:56.827 "jsonrpc": "2.0", 00:29:56.827 "method": "nvmf_set_config", 00:29:56.827 "id": 1, 00:29:56.827 "params": { 00:29:56.827 "admin_cmd_passthru": { 00:29:56.827 "identify_ctrlr": true 00:29:56.827 } 00:29:56.827 } 00:29:56.827 } 00:29:56.827 00:29:56.827 INFO: response: 00:29:56.827 { 00:29:56.827 "jsonrpc": "2.0", 00:29:56.827 "id": 1, 00:29:56.827 "result": true 00:29:56.827 } 00:29:56.827 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.827 10:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.827 INFO: Setting log level to 20 00:29:56.827 INFO: Setting log level to 20 00:29:56.827 INFO: Log level set to 20 00:29:56.827 INFO: Log level set to 20 00:29:56.827 INFO: Requests: 00:29:56.827 { 00:29:56.827 "jsonrpc": "2.0", 00:29:56.827 "method": "framework_start_init", 00:29:56.827 "id": 1 00:29:56.827 } 00:29:56.827 00:29:56.827 INFO: Requests: 00:29:56.827 { 00:29:56.827 "jsonrpc": "2.0", 00:29:56.827 "method": "framework_start_init", 00:29:56.827 "id": 1 00:29:56.827 } 00:29:56.827 00:29:56.827 [2024-07-25 10:45:00.460606] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:56.827 INFO: response: 00:29:56.827 { 00:29:56.827 "jsonrpc": "2.0", 00:29:56.827 "id": 1, 00:29:56.827 "result": true 00:29:56.827 } 00:29:56.827 00:29:56.827 INFO: response: 00:29:56.827 { 00:29:56.827 "jsonrpc": "2.0", 00:29:56.827 "id": 1, 00:29:56.827 "result": true 00:29:56.827 } 00:29:56.827 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.827 10:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.827 10:45:00 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:56.827 INFO: Setting log level to 40 00:29:56.827 INFO: Setting log level to 40 00:29:56.827 INFO: Setting log level to 40 00:29:56.827 [2024-07-25 10:45:00.473987] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.827 10:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.827 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.086 10:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:29:57.086 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.086 10:45:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.373 Nvme0n1 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.373 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.373 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.373 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.373 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.374 [2024-07-25 10:45:03.397593] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.374 [ 00:30:00.374 { 00:30:00.374 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:00.374 "subtype": "Discovery", 00:30:00.374 "listen_addresses": [], 00:30:00.374 "allow_any_host": true, 00:30:00.374 "hosts": [] 00:30:00.374 }, 00:30:00.374 { 00:30:00.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.374 "subtype": "NVMe", 00:30:00.374 "listen_addresses": [ 00:30:00.374 { 00:30:00.374 "trtype": "TCP", 00:30:00.374 "adrfam": "IPv4", 00:30:00.374 "traddr": "10.0.0.2", 00:30:00.374 "trsvcid": "4420" 00:30:00.374 } 00:30:00.374 ], 00:30:00.374 "allow_any_host": true, 00:30:00.374 "hosts": [], 00:30:00.374 "serial_number": 
"SPDK00000000000001", 00:30:00.374 "model_number": "SPDK bdev Controller", 00:30:00.374 "max_namespaces": 1, 00:30:00.374 "min_cntlid": 1, 00:30:00.374 "max_cntlid": 65519, 00:30:00.374 "namespaces": [ 00:30:00.374 { 00:30:00.374 "nsid": 1, 00:30:00.374 "bdev_name": "Nvme0n1", 00:30:00.374 "name": "Nvme0n1", 00:30:00.374 "nguid": "98E59ABDF300496F85ABF37D81728E4B", 00:30:00.374 "uuid": "98e59abd-f300-496f-85ab-f37d81728e4b" 00:30:00.374 } 00:30:00.374 ] 00:30:00.374 } 00:30:00.374 ] 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:00.374 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:00.374 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:00.374 10:45:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:00.374 rmmod nvme_tcp 00:30:00.374 rmmod nvme_fabrics 00:30:00.374 rmmod nvme_keyring 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:00.374 10:45:03 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 4065295 ']' 00:30:00.374 10:45:03 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 4065295 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 4065295 ']' 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 4065295 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:00.374 10:45:03 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4065295 00:30:00.374 10:45:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:00.374 10:45:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:00.374 10:45:04 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4065295' 00:30:00.374 killing process with pid 4065295 00:30:00.374 10:45:04 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 4065295 00:30:00.374 10:45:04 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 4065295 00:30:02.909 10:45:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:02.909 10:45:06 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:02.909 10:45:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:02.909 10:45:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.909 10:45:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:02.909 10:45:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.909 10:45:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:02.909 10:45:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.813 10:45:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:04.813 00:30:04.813 real 0m25.026s 00:30:04.813 user 0m33.919s 00:30:04.813 sys 0m6.313s 00:30:04.813 10:45:08 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:04.813 10:45:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.813 ************************************ 00:30:04.813 END TEST nvmf_identify_passthru 00:30:04.813 ************************************ 00:30:04.813 10:45:08 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:04.813 10:45:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:04.813 10:45:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:04.813 10:45:08 -- common/autotest_common.sh@10 -- # set +x 00:30:04.813 ************************************ 00:30:04.813 START TEST nvmf_dif 00:30:04.813 ************************************ 00:30:04.813 10:45:08 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:04.813 * Looking for test storage... 
00:30:04.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.813 10:45:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.813 10:45:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.814 10:45:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.814 10:45:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.814 10:45:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.814 10:45:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.814 10:45:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.814 10:45:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.814 10:45:08 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:04.814 10:45:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:04.814 10:45:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:04.814 10:45:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:04.814 10:45:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:04.814 10:45:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:04.814 10:45:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.814 10:45:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:04.814 10:45:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:04.814 10:45:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:04.814 10:45:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:11.410 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:11.410 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:11.410 Found net devices under 0000:af:00.0: cvl_0_0 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:11.410 Found net devices under 0000:af:00.1: cvl_0_1 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:11.410 10:45:14 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.410 10:45:15 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.410 10:45:15 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.410 10:45:15 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:11.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:30:11.410 00:30:11.410 --- 10.0.0.2 ping statistics --- 00:30:11.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.410 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:11.410 10:45:15 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:30:11.411 00:30:11.411 --- 10.0.0.1 ping statistics --- 00:30:11.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.411 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:30:11.411 10:45:15 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.411 10:45:15 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:11.411 10:45:15 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:11.411 10:45:15 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:14.696 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:14.696 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:14.696 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:14.696 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:14.696 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:14.696 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:14.697 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:14.697 10:45:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:14.697 10:45:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:14.697 10:45:17 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:14.697 10:45:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=4071830 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 4071830 00:30:14.697 10:45:17 nvmf_dif -- 
common/autotest_common.sh@831 -- # '[' -z 4071830 ']' 00:30:14.697 10:45:17 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.697 10:45:17 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.697 10:45:17 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.697 10:45:17 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.697 10:45:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:14.697 10:45:17 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:14.697 [2024-07-25 10:45:17.940461] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:30:14.697 [2024-07-25 10:45:17.940505] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.697 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.697 [2024-07-25 10:45:18.014140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.697 [2024-07-25 10:45:18.081020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.697 [2024-07-25 10:45:18.081060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.697 [2024-07-25 10:45:18.081069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.697 [2024-07-25 10:45:18.081077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.697 [2024-07-25 10:45:18.081084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
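At this point the DIF variant of the target is running inside the cvl_0_0_ns_spdk namespace; the trace just below creates the TCP transport with --dif-insert-or-strip and a null bdev carrying 16 bytes of metadata with DIF type 1, which is what fio_dif_1_default exercises. A condensed sketch of those steps (paths shortened; assuming rpc_cmd is effectively a thin wrapper around scripts/rpc.py, which may differ in your checkout):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1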
00:30:14.697 [2024-07-25 10:45:18.081104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:30:15.263 10:45:18 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 10:45:18 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.263 10:45:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:15.263 10:45:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 [2024-07-25 10:45:18.751253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.263 10:45:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.263 10:45:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 ************************************ 00:30:15.263 START TEST fio_dif_1_default 00:30:15.263 ************************************ 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 bdev_null0 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 [2024-07-25 10:45:18.819551] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.263 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.263 { 00:30:15.263 "params": { 00:30:15.263 "name": "Nvme$subsystem", 00:30:15.263 "trtype": "$TEST_TRANSPORT", 00:30:15.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.263 "adrfam": "ipv4", 00:30:15.263 "trsvcid": "$NVMF_PORT", 00:30:15.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.263 "hdgst": ${hdgst:-false}, 00:30:15.263 "ddgst": ${ddgst:-false} 00:30:15.263 }, 00:30:15.263 "method": "bdev_nvme_attach_controller" 00:30:15.263 } 00:30:15.263 EOF 00:30:15.263 )") 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:15.264 "params": { 00:30:15.264 "name": "Nvme0", 00:30:15.264 "trtype": "tcp", 00:30:15.264 "traddr": "10.0.0.2", 00:30:15.264 "adrfam": "ipv4", 00:30:15.264 "trsvcid": "4420", 00:30:15.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:15.264 "hdgst": false, 00:30:15.264 "ddgst": false 00:30:15.264 }, 00:30:15.264 "method": "bdev_nvme_attach_controller" 00:30:15.264 }' 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:15.264 10:45:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.522 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:15.523 fio-3.35 00:30:15.523 Starting 1 thread 00:30:15.781 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.984 00:30:27.984 filename0: (groupid=0, jobs=1): err= 0: pid=4072287: Thu Jul 25 10:45:29 2024 00:30:27.984 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10007msec) 00:30:27.985 slat (nsec): min=5577, max=63835, avg=5946.06, stdev=2313.68 00:30:27.985 clat (usec): min=40886, max=43434, avg=41333.45, stdev=482.80 00:30:27.985 lat (usec): min=40892, max=43465, avg=41339.40, stdev=483.06 00:30:27.985 clat percentiles (usec): 00:30:27.985 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:27.985 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:27.985 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:27.985 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:27.985 | 99.99th=[43254] 00:30:27.985 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=385.68, stdev= 7.34, samples=19 00:30:27.985 iops : min= 96, max= 104, 
avg=96.42, stdev= 1.84, samples=19 00:30:27.985 lat (msec) : 50=100.00% 00:30:27.985 cpu : usr=85.39%, sys=14.38%, ctx=13, majf=0, minf=239 00:30:27.985 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.985 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.985 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:27.985 00:30:27.985 Run status group 0 (all jobs): 00:30:27.985 READ: bw=387KiB/s (396kB/s), 387KiB/s-387KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10007-10007msec 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 00:30:27.985 real 0m11.291s 00:30:27.985 user 0m17.036s 00:30:27.985 sys 0m1.835s 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 ************************************ 00:30:27.985 END TEST fio_dif_1_default 00:30:27.985 ************************************ 00:30:27.985 10:45:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:27.985 10:45:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:27.985 10:45:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 ************************************ 00:30:27.985 START TEST fio_dif_1_multi_subsystems 00:30:27.985 ************************************ 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 bdev_null0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 [2024-07-25 10:45:30.205149] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 bdev_null1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.985 { 00:30:27.985 "params": { 00:30:27.985 "name": "Nvme$subsystem", 00:30:27.985 "trtype": "$TEST_TRANSPORT", 00:30:27.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.985 "adrfam": "ipv4", 00:30:27.985 "trsvcid": "$NVMF_PORT", 00:30:27.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.985 "hdgst": ${hdgst:-false}, 00:30:27.985 "ddgst": ${ddgst:-false} 00:30:27.985 }, 00:30:27.985 "method": "bdev_nvme_attach_controller" 00:30:27.985 } 00:30:27.985 EOF 00:30:27.985 )") 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.985 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.986 { 00:30:27.986 "params": { 00:30:27.986 "name": "Nvme$subsystem", 00:30:27.986 "trtype": "$TEST_TRANSPORT", 00:30:27.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.986 "adrfam": "ipv4", 00:30:27.986 "trsvcid": "$NVMF_PORT", 00:30:27.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.986 "hdgst": ${hdgst:-false}, 00:30:27.986 "ddgst": ${ddgst:-false} 00:30:27.986 }, 00:30:27.986 "method": "bdev_nvme_attach_controller" 00:30:27.986 } 00:30:27.986 EOF 00:30:27.986 )") 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:27.986 "params": { 00:30:27.986 "name": "Nvme0", 00:30:27.986 "trtype": "tcp", 00:30:27.986 "traddr": "10.0.0.2", 00:30:27.986 "adrfam": "ipv4", 00:30:27.986 "trsvcid": "4420", 00:30:27.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:27.986 "hdgst": false, 00:30:27.986 "ddgst": false 00:30:27.986 }, 00:30:27.986 "method": "bdev_nvme_attach_controller" 00:30:27.986 },{ 00:30:27.986 "params": { 00:30:27.986 "name": "Nvme1", 00:30:27.986 "trtype": "tcp", 00:30:27.986 "traddr": "10.0.0.2", 00:30:27.986 "adrfam": "ipv4", 00:30:27.986 "trsvcid": "4420", 00:30:27.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:27.986 "hdgst": false, 00:30:27.986 "ddgst": false 00:30:27.986 }, 00:30:27.986 "method": "bdev_nvme_attach_controller" 00:30:27.986 }' 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:27.986 10:45:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.986 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:27.986 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:27.986 fio-3.35 00:30:27.986 Starting 2 threads 00:30:27.986 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.966 00:30:37.966 filename0: (groupid=0, jobs=1): err= 0: pid=4074283: Thu Jul 25 10:45:41 2024 00:30:37.966 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10026msec) 00:30:37.966 slat (nsec): min=5636, max=65868, avg=7561.03, stdev=3269.60 00:30:37.966 clat (usec): min=40859, max=42749, avg=41578.33, stdev=481.50 00:30:37.966 lat (usec): min=40865, max=42776, avg=41585.89, stdev=481.77 00:30:37.966 clat percentiles (usec): 00:30:37.966 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:37.966 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:30:37.966 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:37.966 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:37.966 | 99.99th=[42730] 
00:30:37.966 bw ( KiB/s): min= 352, max= 416, per=49.92%, avg=384.00, stdev=10.38, samples=20 00:30:37.966 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:30:37.966 lat (msec) : 50=100.00% 00:30:37.966 cpu : usr=93.94%, sys=5.83%, ctx=9, majf=0, minf=109 00:30:37.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:37.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.966 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:37.966 filename1: (groupid=0, jobs=1): err= 0: pid=4074284: Thu Jul 25 10:45:41 2024 00:30:37.966 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10017msec) 00:30:37.966 slat (nsec): min=3856, max=29395, avg=7451.45, stdev=2705.83 00:30:37.966 clat (usec): min=40894, max=44641, avg=41541.08, stdev=542.35 00:30:37.966 lat (usec): min=40900, max=44661, avg=41548.53, stdev=542.48 00:30:37.966 clat percentiles (usec): 00:30:37.966 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:37.966 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:30:37.966 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:37.966 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:30:37.966 | 99.99th=[44827] 00:30:37.966 bw ( KiB/s): min= 352, max= 416, per=49.92%, avg=384.00, stdev=10.38, samples=20 00:30:37.966 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:30:37.966 lat (msec) : 50=100.00% 00:30:37.966 cpu : usr=93.64%, sys=6.11%, ctx=13, majf=0, minf=168 00:30:37.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:37.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.966 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:37.966 00:30:37.966 Run status group 0 (all jobs): 00:30:37.966 READ: bw=769KiB/s (788kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=7712KiB (7897kB), run=10017-10026msec 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.966 10:45:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.966 00:30:37.966 real 0m11.388s 00:30:37.966 user 0m28.130s 00:30:37.966 sys 0m1.624s 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:37.966 10:45:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 ************************************ 00:30:37.966 END TEST fio_dif_1_multi_subsystems 00:30:37.966 ************************************ 00:30:37.966 10:45:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:37.966 10:45:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:37.966 10:45:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:37.966 10:45:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 ************************************ 00:30:37.966 START TEST fio_dif_rand_params 00:30:37.966 ************************************ 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 bdev_null0 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.966 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.226 [2024-07-25 10:45:41.681300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.226 { 00:30:38.226 "params": { 00:30:38.226 "name": "Nvme$subsystem", 00:30:38.226 "trtype": "$TEST_TRANSPORT", 00:30:38.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.226 "adrfam": "ipv4", 00:30:38.226 "trsvcid": "$NVMF_PORT", 00:30:38.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.226 "hdgst": ${hdgst:-false}, 00:30:38.226 "ddgst": ${ddgst:-false} 00:30:38.226 }, 00:30:38.226 "method": "bdev_nvme_attach_controller" 00:30:38.226 } 00:30:38.226 EOF 00:30:38.226 )") 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:38.226 "params": { 00:30:38.226 "name": "Nvme0", 00:30:38.226 "trtype": "tcp", 00:30:38.226 "traddr": "10.0.0.2", 00:30:38.226 "adrfam": "ipv4", 00:30:38.226 "trsvcid": "4420", 00:30:38.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.226 "hdgst": false, 00:30:38.226 "ddgst": false 00:30:38.226 }, 00:30:38.226 "method": "bdev_nvme_attach_controller" 00:30:38.226 }' 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:38.226 10:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.485 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:38.485 ... 00:30:38.485 fio-3.35 00:30:38.485 Starting 3 threads 00:30:38.485 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.092 00:30:45.092 filename0: (groupid=0, jobs=1): err= 0: pid=4076273: Thu Jul 25 10:45:47 2024 00:30:45.092 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(175MiB/5046msec) 00:30:45.092 slat (nsec): min=5889, max=76791, avg=9155.10, stdev=3248.50 00:30:45.092 clat (usec): min=3610, max=91678, avg=10768.20, stdev=12119.36 00:30:45.092 lat (usec): min=3617, max=91690, avg=10777.35, stdev=12119.51 00:30:45.092 clat percentiles (usec): 00:30:45.092 | 1.00th=[ 3949], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5735], 00:30:45.092 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7767], 00:30:45.092 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[10945], 95.00th=[49021], 00:30:45.092 | 99.00th=[51119], 99.50th=[51643], 99.90th=[91751], 99.95th=[91751], 00:30:45.092 | 99.99th=[91751] 00:30:45.092 bw ( KiB/s): min=26112, max=44800, per=34.24%, avg=35788.80, stdev=6649.21, samples=10 00:30:45.092 iops : min= 204, max= 350, avg=279.60, stdev=51.95, samples=10 00:30:45.092 lat (msec) : 4=1.14%, 10=85.14%, 20=5.36%, 50=5.43%, 100=2.93% 00:30:45.092 cpu : usr=92.77%, sys=6.90%, ctx=8, majf=0, minf=109 00:30:45.092 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.092 issued rwts: total=1400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.092 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.092 filename0: (groupid=0, jobs=1): err= 0: pid=4076274: Thu Jul 25 10:45:47 2024 00:30:45.092 read: IOPS=284, BW=35.6MiB/s (37.4MB/s)(178MiB/5004msec) 00:30:45.092 slat (nsec): min=5879, max=32483, avg=8942.43, stdev=2611.82 00:30:45.092 clat (usec): min=3372, max=90774, avg=10514.90, stdev=11923.71 00:30:45.092 lat (usec): min=3379, max=90786, avg=10523.85, stdev=11924.06 00:30:45.093 clat percentiles (usec): 00:30:45.093 | 1.00th=[ 3884], 5.00th=[ 4359], 10.00th=[ 4686], 20.00th=[ 5473], 00:30:45.093 | 30.00th=[ 6194], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7767], 00:30:45.093 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[49546], 00:30:45.093 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[90702], 00:30:45.093 | 99.99th=[90702] 00:30:45.093 bw ( KiB/s): min=23040, max=47616, per=34.85%, avg=36428.80, stdev=7526.82, samples=10 00:30:45.093 iops : min= 180, max= 372, avg=284.60, stdev=58.80, samples=10 00:30:45.093 lat (msec) : 4=1.68%, 10=85.48%, 20=4.91%, 50=3.72%, 100=4.21% 00:30:45.093 cpu : usr=92.66%, sys=7.00%, ctx=11, majf=0, minf=100 00:30:45.093 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.093 issued rwts: total=1426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.093 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:30:45.093 filename0: (groupid=0, jobs=1): err= 0: pid=4076275: Thu Jul 25 10:45:47 2024 00:30:45.093 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(162MiB/5028msec) 00:30:45.093 slat (nsec): min=5916, max=28040, avg=9206.57, stdev=2538.52 00:30:45.093 clat (usec): min=3859, max=95487, avg=11632.11, stdev=12854.71 00:30:45.093 lat (usec): min=3866, max=95494, avg=11641.31, stdev=12854.84 00:30:45.093 clat percentiles (usec): 00:30:45.093 | 1.00th=[ 4293], 5.00th=[ 4752], 10.00th=[ 5473], 20.00th=[ 6259], 00:30:45.093 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8094], 00:30:45.093 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[11994], 95.00th=[50070], 00:30:45.093 | 99.00th=[52691], 99.50th=[53216], 99.90th=[55313], 99.95th=[95945], 00:30:45.093 | 99.99th=[95945] 00:30:45.093 bw ( KiB/s): min=23040, max=53504, per=31.64%, avg=33075.20, stdev=10345.98, samples=10 00:30:45.093 iops : min= 180, max= 418, avg=258.40, stdev=80.83, samples=10 00:30:45.093 lat (msec) : 4=0.23%, 10=81.78%, 20=8.34%, 50=4.79%, 100=4.86% 00:30:45.093 cpu : usr=93.02%, sys=6.60%, ctx=13, majf=0, minf=100 00:30:45.093 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.093 issued rwts: total=1295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.093 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.093 00:30:45.093 Run status group 0 (all jobs): 00:30:45.093 READ: bw=102MiB/s (107MB/s), 32.2MiB/s-35.6MiB/s (33.8MB/s-37.4MB/s), io=515MiB (540MB), run=5004-5046msec 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:45.093 
10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 bdev_null0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 [2024-07-25 10:45:47.855123] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 bdev_null1 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 bdev_null2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # config=() 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:45.093 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.094 { 00:30:45.094 "params": { 00:30:45.094 "name": "Nvme$subsystem", 00:30:45.094 "trtype": "$TEST_TRANSPORT", 00:30:45.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.094 "adrfam": "ipv4", 00:30:45.094 "trsvcid": "$NVMF_PORT", 00:30:45.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.094 "hdgst": ${hdgst:-false}, 00:30:45.094 "ddgst": ${ddgst:-false} 00:30:45.094 }, 00:30:45.094 "method": "bdev_nvme_attach_controller" 00:30:45.094 } 00:30:45.094 EOF 00:30:45.094 )") 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.094 { 00:30:45.094 "params": { 00:30:45.094 "name": "Nvme$subsystem", 00:30:45.094 "trtype": "$TEST_TRANSPORT", 00:30:45.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.094 "adrfam": "ipv4", 00:30:45.094 "trsvcid": "$NVMF_PORT", 00:30:45.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.094 "hdgst": ${hdgst:-false}, 00:30:45.094 "ddgst": ${ddgst:-false} 00:30:45.094 }, 00:30:45.094 "method": "bdev_nvme_attach_controller" 00:30:45.094 } 00:30:45.094 EOF 00:30:45.094 )") 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.094 { 00:30:45.094 "params": { 00:30:45.094 "name": "Nvme$subsystem", 00:30:45.094 "trtype": "$TEST_TRANSPORT", 00:30:45.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.094 "adrfam": "ipv4", 00:30:45.094 "trsvcid": "$NVMF_PORT", 00:30:45.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.094 "hdgst": ${hdgst:-false}, 00:30:45.094 "ddgst": ${ddgst:-false} 00:30:45.094 }, 00:30:45.094 "method": "bdev_nvme_attach_controller" 00:30:45.094 } 00:30:45.094 EOF 00:30:45.094 )") 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:45.094 "params": { 00:30:45.094 "name": "Nvme0", 00:30:45.094 "trtype": "tcp", 00:30:45.094 "traddr": "10.0.0.2", 00:30:45.094 "adrfam": "ipv4", 00:30:45.094 "trsvcid": "4420", 00:30:45.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:45.094 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:45.094 "hdgst": false, 00:30:45.094 "ddgst": false 00:30:45.094 }, 00:30:45.094 "method": "bdev_nvme_attach_controller" 00:30:45.094 },{ 00:30:45.094 "params": { 00:30:45.094 "name": "Nvme1", 00:30:45.094 "trtype": "tcp", 00:30:45.094 "traddr": "10.0.0.2", 00:30:45.094 "adrfam": "ipv4", 00:30:45.094 "trsvcid": "4420", 00:30:45.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.094 "hdgst": false, 00:30:45.094 "ddgst": false 00:30:45.094 }, 00:30:45.094 "method": "bdev_nvme_attach_controller" 00:30:45.094 },{ 00:30:45.094 "params": { 00:30:45.094 "name": "Nvme2", 00:30:45.094 "trtype": "tcp", 00:30:45.094 "traddr": "10.0.0.2", 00:30:45.094 "adrfam": "ipv4", 00:30:45.094 "trsvcid": "4420", 00:30:45.094 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:45.094 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:45.094 "hdgst": false, 00:30:45.094 "ddgst": false 00:30:45.094 }, 00:30:45.094 "method": "bdev_nvme_attach_controller" 00:30:45.094 }' 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.094 10:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:45.094 10:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:45.094 10:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:45.094 10:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:45.094 10:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.094 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:45.094 ... 00:30:45.094 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:45.094 ... 00:30:45.094 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:45.094 ... 
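(The command the harness traces here is visible above: fio_bdev preloads the SPDK bdev fio plugin via LD_PRELOAD and hands the generated JSON and job file over /dev/fd/62 and /dev/fd/61. The lines below are a stand-alone sketch of the same invocation shape, assuming regular files in place of those descriptors; /tmp/bdev.json would need to be a complete SPDK JSON config and /tmp/dif.fio an ordinary fio job file, neither of which appears in full in this excerpt.)

# sketch: the harness's fio invocation, replayed outside the test scripts
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/bdev.json /tmp/dif.fio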
00:30:45.094 fio-3.35 00:30:45.094 Starting 24 threads 00:30:45.094 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.302 00:30:57.302 filename0: (groupid=0, jobs=1): err= 0: pid=4077473: Thu Jul 25 10:45:59 2024 00:30:57.302 read: IOPS=626, BW=2505KiB/s (2566kB/s)(24.5MiB/10012msec) 00:30:57.302 slat (nsec): min=3933, max=63782, avg=13299.04, stdev=5926.18 00:30:57.302 clat (usec): min=5991, max=49590, avg=25428.37, stdev=3905.32 00:30:57.302 lat (usec): min=5999, max=49604, avg=25441.67, stdev=3906.00 00:30:57.302 clat percentiles (usec): 00:30:57.302 | 1.00th=[ 8586], 5.00th=[18220], 10.00th=[23725], 20.00th=[25297], 00:30:57.302 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.302 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27657], 00:30:57.302 | 99.00th=[39060], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:30:57.302 | 99.99th=[49546] 00:30:57.302 bw ( KiB/s): min= 2395, max= 2704, per=4.27%, avg=2497.58, stdev=93.64, samples=19 00:30:57.302 iops : min= 598, max= 676, avg=624.21, stdev=23.52, samples=19 00:30:57.302 lat (msec) : 10=1.48%, 20=4.10%, 50=94.42% 00:30:57.302 cpu : usr=97.34%, sys=2.31%, ctx=19, majf=0, minf=0 00:30:57.302 IO depths : 1=4.6%, 2=9.2%, 4=20.8%, 8=57.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:30:57.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 issued rwts: total=6271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.302 filename0: (groupid=0, jobs=1): err= 0: pid=4077474: Thu Jul 25 10:45:59 2024 00:30:57.302 read: IOPS=611, BW=2445KiB/s (2503kB/s)(23.9MiB/10001msec) 00:30:57.302 slat (nsec): min=5566, max=88236, avg=33984.67, stdev=16152.57 00:30:57.302 clat (usec): min=12565, max=43576, avg=25913.77, stdev=1581.74 00:30:57.302 lat (usec): min=12579, max=43591, avg=25947.75, stdev=1580.57 00:30:57.302 clat percentiles (usec): 00:30:57.302 | 1.00th=[20841], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:30:57.302 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:30:57.302 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:30:57.302 | 99.00th=[30802], 99.50th=[34866], 99.90th=[41157], 99.95th=[43254], 00:30:57.302 | 99.99th=[43779] 00:30:57.302 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2445.42, stdev=69.70, samples=19 00:30:57.302 iops : min= 576, max= 640, avg=611.32, stdev=17.42, samples=19 00:30:57.302 lat (msec) : 20=0.83%, 50=99.17% 00:30:57.302 cpu : usr=97.32%, sys=2.32%, ctx=24, majf=0, minf=9 00:30:57.302 IO depths : 1=5.6%, 2=11.3%, 4=23.5%, 8=52.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:30:57.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.302 filename0: (groupid=0, jobs=1): err= 0: pid=4077475: Thu Jul 25 10:45:59 2024 00:30:57.302 read: IOPS=612, BW=2449KiB/s (2508kB/s)(23.9MiB/10005msec) 00:30:57.302 slat (nsec): min=5008, max=88398, avg=23980.48, stdev=14409.47 00:30:57.302 clat (usec): min=14555, max=39897, avg=25951.40, stdev=2091.04 00:30:57.302 lat (usec): min=14562, max=39911, avg=25975.38, stdev=2090.58 00:30:57.302 clat percentiles (usec): 00:30:57.302 | 1.00th=[17171], 
5.00th=[23987], 10.00th=[25035], 20.00th=[25560], 00:30:57.302 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.302 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27395], 00:30:57.302 | 99.00th=[34866], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:30:57.302 | 99.99th=[40109] 00:30:57.302 bw ( KiB/s): min= 2304, max= 2608, per=4.18%, avg=2444.26, stdev=72.11, samples=19 00:30:57.302 iops : min= 576, max= 652, avg=611.00, stdev=18.01, samples=19 00:30:57.302 lat (msec) : 20=2.11%, 50=97.89% 00:30:57.302 cpu : usr=97.25%, sys=2.40%, ctx=17, majf=0, minf=9 00:30:57.302 IO depths : 1=4.8%, 2=9.6%, 4=21.8%, 8=56.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:30:57.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 issued rwts: total=6126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.302 filename0: (groupid=0, jobs=1): err= 0: pid=4077476: Thu Jul 25 10:45:59 2024 00:30:57.302 read: IOPS=608, BW=2434KiB/s (2492kB/s)(23.8MiB/10003msec) 00:30:57.302 slat (usec): min=5, max=234, avg=33.67, stdev=18.38 00:30:57.302 clat (usec): min=3404, max=57546, avg=25999.76, stdev=3651.97 00:30:57.302 lat (usec): min=3411, max=57559, avg=26033.43, stdev=3651.09 00:30:57.302 clat percentiles (usec): 00:30:57.302 | 1.00th=[10421], 5.00th=[24249], 10.00th=[24773], 20.00th=[25297], 00:30:57.302 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:30:57.302 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27132], 95.00th=[27919], 00:30:57.302 | 99.00th=[43254], 99.50th=[44827], 99.90th=[47449], 99.95th=[57410], 00:30:57.302 | 99.99th=[57410] 00:30:57.302 bw ( KiB/s): min= 2056, max= 2560, per=4.14%, avg=2420.42, stdev=104.35, samples=19 00:30:57.302 iops : min= 514, max= 640, avg=604.95, stdev=26.10, samples=19 00:30:57.302 lat (msec) : 4=0.15%, 10=0.80%, 20=1.49%, 50=97.47%, 100=0.08% 00:30:57.302 cpu : usr=94.45%, sys=3.31%, ctx=92, majf=0, minf=9 00:30:57.302 IO depths : 1=4.5%, 2=8.9%, 4=19.5%, 8=58.1%, 16=9.0%, 32=0.0%, >=64=0.0% 00:30:57.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 complete : 0=0.0%, 4=92.9%, 8=2.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.302 issued rwts: total=6087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.302 filename0: (groupid=0, jobs=1): err= 0: pid=4077477: Thu Jul 25 10:45:59 2024 00:30:57.302 read: IOPS=608, BW=2433KiB/s (2492kB/s)(23.8MiB/10011msec) 00:30:57.302 slat (nsec): min=4767, max=85425, avg=29998.87, stdev=16490.55 00:30:57.303 clat (usec): min=11080, max=56969, avg=26051.79, stdev=2413.13 00:30:57.303 lat (usec): min=11088, max=56982, avg=26081.78, stdev=2412.69 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[16909], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:30:57.303 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:30:57.303 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27657], 00:30:57.303 | 99.00th=[36963], 99.50th=[40109], 99.90th=[44827], 99.95th=[44827], 00:30:57.303 | 99.99th=[56886] 00:30:57.303 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2438.47, stdev=67.14, samples=19 00:30:57.303 iops : min= 576, max= 640, avg=609.58, stdev=16.79, samples=19 00:30:57.303 lat (msec) : 20=1.25%, 50=98.70%, 100=0.05% 00:30:57.303 cpu : 
usr=97.33%, sys=2.30%, ctx=17, majf=0, minf=9 00:30:57.303 IO depths : 1=5.0%, 2=10.2%, 4=22.2%, 8=55.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:30:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 issued rwts: total=6090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.303 filename0: (groupid=0, jobs=1): err= 0: pid=4077478: Thu Jul 25 10:45:59 2024 00:30:57.303 read: IOPS=610, BW=2444KiB/s (2502kB/s)(23.9MiB/10005msec) 00:30:57.303 slat (usec): min=6, max=220, avg=35.77, stdev=15.10 00:30:57.303 clat (usec): min=13876, max=45841, avg=25893.79, stdev=1780.25 00:30:57.303 lat (usec): min=13890, max=45848, avg=25929.56, stdev=1779.86 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[18744], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:30:57.303 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:30:57.303 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[27395], 00:30:57.303 | 99.00th=[32113], 99.50th=[36963], 99.90th=[40109], 99.95th=[45876], 00:30:57.303 | 99.99th=[45876] 00:30:57.303 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2438.21, stdev=52.16, samples=19 00:30:57.303 iops : min= 576, max= 640, avg=609.47, stdev=13.06, samples=19 00:30:57.303 lat (msec) : 20=1.24%, 50=98.76% 00:30:57.303 cpu : usr=95.81%, sys=2.61%, ctx=120, majf=0, minf=9 00:30:57.303 IO depths : 1=5.4%, 2=11.2%, 4=23.9%, 8=52.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:30:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.303 filename0: (groupid=0, jobs=1): err= 0: pid=4077479: Thu Jul 25 10:45:59 2024 00:30:57.303 read: IOPS=604, BW=2416KiB/s (2474kB/s)(23.6MiB/10005msec) 00:30:57.303 slat (nsec): min=6342, max=92732, avg=26212.75, stdev=16439.42 00:30:57.303 clat (usec): min=7490, max=48512, avg=26270.11, stdev=3930.29 00:30:57.303 lat (usec): min=7498, max=48520, avg=26296.33, stdev=3930.25 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[13829], 5.00th=[20841], 10.00th=[24773], 20.00th=[25297], 00:30:57.303 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.303 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[33817], 00:30:57.303 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:30:57.303 | 99.99th=[48497] 00:30:57.303 bw ( KiB/s): min= 2176, max= 2560, per=4.12%, avg=2409.84, stdev=84.30, samples=19 00:30:57.303 iops : min= 544, max= 640, avg=602.42, stdev=21.07, samples=19 00:30:57.303 lat (msec) : 10=0.30%, 20=3.95%, 50=95.75% 00:30:57.303 cpu : usr=97.18%, sys=2.48%, ctx=17, majf=0, minf=9 00:30:57.303 IO depths : 1=4.0%, 2=8.1%, 4=19.2%, 8=59.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:30:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 complete : 0=0.0%, 4=92.7%, 8=1.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 issued rwts: total=6044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.303 filename0: (groupid=0, jobs=1): err= 0: pid=4077480: Thu Jul 25 10:45:59 2024 00:30:57.303 read: IOPS=599, BW=2398KiB/s 
(2456kB/s)(23.4MiB/10003msec) 00:30:57.303 slat (usec): min=5, max=103, avg=23.96, stdev=16.58 00:30:57.303 clat (usec): min=3694, max=52381, avg=26542.55, stdev=4096.91 00:30:57.303 lat (usec): min=3702, max=52396, avg=26566.51, stdev=4097.08 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[11338], 5.00th=[24249], 10.00th=[25035], 20.00th=[25297], 00:30:57.303 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:30:57.303 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[33817], 00:30:57.303 | 99.00th=[43779], 99.50th=[47449], 99.90th=[50070], 99.95th=[52167], 00:30:57.303 | 99.99th=[52167] 00:30:57.303 bw ( KiB/s): min= 2240, max= 2512, per=4.07%, avg=2382.11, stdev=73.12, samples=19 00:30:57.303 iops : min= 560, max= 628, avg=595.37, stdev=18.29, samples=19 00:30:57.303 lat (msec) : 4=0.12%, 10=0.43%, 20=2.20%, 50=97.13%, 100=0.12% 00:30:57.303 cpu : usr=97.27%, sys=2.37%, ctx=19, majf=0, minf=10 00:30:57.303 IO depths : 1=0.9%, 2=2.3%, 4=9.6%, 8=72.9%, 16=14.2%, 32=0.0%, >=64=0.0% 00:30:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 complete : 0=0.0%, 4=91.1%, 8=5.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 issued rwts: total=5997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.303 filename1: (groupid=0, jobs=1): err= 0: pid=4077481: Thu Jul 25 10:45:59 2024 00:30:57.303 read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.0MiB/10004msec) 00:30:57.303 slat (nsec): min=6231, max=85535, avg=20182.83, stdev=12464.73 00:30:57.303 clat (usec): min=17428, max=49757, avg=28262.59, stdev=4919.26 00:30:57.303 lat (usec): min=17440, max=49769, avg=28282.77, stdev=4917.22 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[24249], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:30:57.303 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:30:57.303 | 70.00th=[26870], 80.00th=[28967], 90.00th=[34866], 95.00th=[40109], 00:30:57.303 | 99.00th=[48497], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:30:57.303 | 99.99th=[49546] 00:30:57.303 bw ( KiB/s): min= 1920, max= 2432, per=3.85%, avg=2249.79, stdev=166.75, samples=19 00:30:57.303 iops : min= 480, max= 608, avg=562.37, stdev=41.70, samples=19 00:30:57.303 lat (msec) : 20=0.04%, 50=99.96% 00:30:57.303 cpu : usr=96.71%, sys=2.94%, ctx=26, majf=0, minf=9 00:30:57.303 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.303 filename1: (groupid=0, jobs=1): err= 0: pid=4077482: Thu Jul 25 10:45:59 2024 00:30:57.303 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10017msec) 00:30:57.303 slat (nsec): min=6274, max=90775, avg=27238.79, stdev=16118.10 00:30:57.303 clat (usec): min=15342, max=44856, avg=26083.67, stdev=1989.80 00:30:57.303 lat (usec): min=15349, max=44868, avg=26110.91, stdev=1988.60 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[20579], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:30:57.303 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.303 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27657], 00:30:57.303 | 99.00th=[33817], 
99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:30:57.303 | 99.99th=[44827] 00:30:57.303 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2434.42, stdev=69.74, samples=19 00:30:57.303 iops : min= 576, max= 640, avg=608.53, stdev=17.45, samples=19 00:30:57.303 lat (msec) : 20=0.92%, 50=99.08% 00:30:57.303 cpu : usr=97.18%, sys=2.45%, ctx=24, majf=0, minf=9 00:30:57.303 IO depths : 1=4.9%, 2=10.2%, 4=23.2%, 8=54.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:30:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 issued rwts: total=6092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.303 filename1: (groupid=0, jobs=1): err= 0: pid=4077483: Thu Jul 25 10:45:59 2024 00:30:57.303 read: IOPS=609, BW=2437KiB/s (2496kB/s)(23.8MiB/10017msec) 00:30:57.303 slat (usec): min=6, max=102, avg=34.15, stdev=16.71 00:30:57.303 clat (usec): min=14584, max=44673, avg=25947.86, stdev=2045.97 00:30:57.303 lat (usec): min=14592, max=44689, avg=25982.01, stdev=2045.12 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[17695], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:30:57.303 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:30:57.303 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27395], 00:30:57.303 | 99.00th=[35390], 99.50th=[36963], 99.90th=[44303], 99.95th=[44827], 00:30:57.303 | 99.99th=[44827] 00:30:57.303 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2440.68, stdev=74.39, samples=19 00:30:57.303 iops : min= 576, max= 640, avg=610.11, stdev=18.60, samples=19 00:30:57.303 lat (msec) : 20=1.31%, 50=98.69% 00:30:57.303 cpu : usr=97.14%, sys=2.47%, ctx=16, majf=0, minf=9 00:30:57.303 IO depths : 1=5.4%, 2=10.8%, 4=23.4%, 8=53.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:30:57.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.303 issued rwts: total=6103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.303 filename1: (groupid=0, jobs=1): err= 0: pid=4077484: Thu Jul 25 10:45:59 2024 00:30:57.303 read: IOPS=636, BW=2544KiB/s (2605kB/s)(24.9MiB/10003msec) 00:30:57.303 slat (nsec): min=6210, max=60860, avg=12235.47, stdev=5571.07 00:30:57.303 clat (usec): min=1664, max=49967, avg=25055.73, stdev=4625.60 00:30:57.303 lat (usec): min=1676, max=49973, avg=25067.96, stdev=4625.76 00:30:57.303 clat percentiles (usec): 00:30:57.303 | 1.00th=[ 4228], 5.00th=[15664], 10.00th=[22938], 20.00th=[25035], 00:30:57.304 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.304 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27395], 00:30:57.304 | 99.00th=[39060], 99.50th=[43779], 99.90th=[48497], 99.95th=[49546], 00:30:57.304 | 99.99th=[50070] 00:30:57.304 bw ( KiB/s): min= 2427, max= 3072, per=4.35%, avg=2541.47, stdev=159.66, samples=19 00:30:57.304 iops : min= 606, max= 768, avg=635.26, stdev=39.97, samples=19 00:30:57.304 lat (msec) : 2=0.06%, 4=0.88%, 10=2.11%, 20=4.34%, 50=92.61% 00:30:57.304 cpu : usr=96.96%, sys=2.69%, ctx=16, majf=0, minf=9 00:30:57.304 IO depths : 1=4.8%, 2=9.9%, 4=22.1%, 8=55.2%, 16=8.0%, 32=0.0%, >=64=0.0% 00:30:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 complete : 0=0.0%, 4=93.6%, 8=0.8%, 
16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 issued rwts: total=6362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.304 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.304 filename1: (groupid=0, jobs=1): err= 0: pid=4077485: Thu Jul 25 10:45:59 2024 00:30:57.304 read: IOPS=622, BW=2490KiB/s (2550kB/s)(24.3MiB/10009msec) 00:30:57.304 slat (nsec): min=6416, max=59641, avg=13593.62, stdev=6071.27 00:30:57.304 clat (usec): min=6635, max=40063, avg=25587.91, stdev=2454.48 00:30:57.304 lat (usec): min=6648, max=40094, avg=25601.50, stdev=2455.29 00:30:57.304 clat percentiles (usec): 00:30:57.304 | 1.00th=[10421], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:30:57.304 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.304 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:30:57.304 | 99.00th=[27919], 99.50th=[28967], 99.90th=[38011], 99.95th=[40109], 00:30:57.304 | 99.99th=[40109] 00:30:57.304 bw ( KiB/s): min= 2427, max= 2608, per=4.25%, avg=2487.84, stdev=68.69, samples=19 00:30:57.304 iops : min= 606, max= 652, avg=621.89, stdev=17.18, samples=19 00:30:57.304 lat (msec) : 10=0.82%, 20=1.94%, 50=97.24% 00:30:57.304 cpu : usr=97.21%, sys=2.45%, ctx=19, majf=0, minf=9 00:30:57.304 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 issued rwts: total=6230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.304 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.304 filename1: (groupid=0, jobs=1): err= 0: pid=4077486: Thu Jul 25 10:45:59 2024 00:30:57.304 read: IOPS=610, BW=2444KiB/s (2502kB/s)(23.9MiB/10004msec) 00:30:57.304 slat (nsec): min=6270, max=87000, avg=24042.43, stdev=16230.07 00:30:57.304 clat (usec): min=8882, max=52763, avg=26005.45, stdev=2814.13 00:30:57.304 lat (usec): min=8893, max=52781, avg=26029.49, stdev=2814.63 00:30:57.304 clat percentiles (usec): 00:30:57.304 | 1.00th=[15795], 5.00th=[24249], 10.00th=[25035], 20.00th=[25297], 00:30:57.304 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.304 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27919], 00:30:57.304 | 99.00th=[38011], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:30:57.304 | 99.99th=[52691] 00:30:57.304 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2431.74, stdev=57.70, samples=19 00:30:57.304 iops : min= 576, max= 640, avg=607.89, stdev=14.43, samples=19 00:30:57.304 lat (msec) : 10=0.03%, 20=2.52%, 50=97.40%, 100=0.05% 00:30:57.304 cpu : usr=97.35%, sys=2.30%, ctx=15, majf=0, minf=9 00:30:57.304 IO depths : 1=3.2%, 2=8.2%, 4=21.6%, 8=57.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:30:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.304 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.304 filename1: (groupid=0, jobs=1): err= 0: pid=4077487: Thu Jul 25 10:45:59 2024 00:30:57.304 read: IOPS=626, BW=2505KiB/s (2565kB/s)(24.5MiB/10023msec) 00:30:57.304 slat (nsec): min=6249, max=64950, avg=12013.29, stdev=5621.62 00:30:57.304 clat (usec): min=6583, max=45397, avg=25463.85, stdev=3912.18 00:30:57.304 lat (usec): min=6595, max=45411, avg=25475.86, stdev=3912.81 
00:30:57.304 clat percentiles (usec): 00:30:57.304 | 1.00th=[10945], 5.00th=[16909], 10.00th=[22152], 20.00th=[25297], 00:30:57.304 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.304 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[28443], 00:30:57.304 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[45351], 00:30:57.304 | 99.99th=[45351] 00:30:57.304 bw ( KiB/s): min= 2400, max= 2736, per=4.28%, avg=2503.40, stdev=86.10, samples=20 00:30:57.304 iops : min= 600, max= 684, avg=625.80, stdev=21.51, samples=20 00:30:57.304 lat (msec) : 10=0.41%, 20=6.92%, 50=92.67% 00:30:57.304 cpu : usr=96.85%, sys=2.80%, ctx=36, majf=0, minf=9 00:30:57.304 IO depths : 1=3.6%, 2=7.5%, 4=17.8%, 8=61.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:30:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 complete : 0=0.0%, 4=92.5%, 8=2.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 issued rwts: total=6276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.304 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.304 filename1: (groupid=0, jobs=1): err= 0: pid=4077488: Thu Jul 25 10:45:59 2024 00:30:57.304 read: IOPS=601, BW=2408KiB/s (2466kB/s)(23.5MiB/10005msec) 00:30:57.304 slat (usec): min=6, max=105, avg=24.51, stdev=16.12 00:30:57.304 clat (usec): min=5127, max=46842, avg=26393.97, stdev=3742.02 00:30:57.304 lat (usec): min=5134, max=46867, avg=26418.48, stdev=3741.38 00:30:57.304 clat percentiles (usec): 00:30:57.304 | 1.00th=[14484], 5.00th=[23725], 10.00th=[24773], 20.00th=[25560], 00:30:57.304 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:30:57.304 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[33162], 00:30:57.304 | 99.00th=[42206], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:30:57.304 | 99.99th=[46924] 00:30:57.304 bw ( KiB/s): min= 2304, max= 2560, per=4.11%, avg=2402.63, stdev=66.51, samples=19 00:30:57.304 iops : min= 576, max= 640, avg=600.58, stdev=16.66, samples=19 00:30:57.304 lat (msec) : 10=0.48%, 20=2.71%, 50=96.81% 00:30:57.304 cpu : usr=97.61%, sys=2.04%, ctx=17, majf=0, minf=9 00:30:57.304 IO depths : 1=2.8%, 2=5.6%, 4=13.9%, 8=65.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 complete : 0=0.0%, 4=91.8%, 8=4.5%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 issued rwts: total=6023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.304 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.304 filename2: (groupid=0, jobs=1): err= 0: pid=4077489: Thu Jul 25 10:45:59 2024 00:30:57.304 read: IOPS=607, BW=2431KiB/s (2489kB/s)(23.8MiB/10004msec) 00:30:57.304 slat (nsec): min=6165, max=85052, avg=28082.98, stdev=17384.82 00:30:57.304 clat (usec): min=6414, max=48435, avg=26105.46, stdev=3220.69 00:30:57.304 lat (usec): min=6420, max=48452, avg=26133.54, stdev=3220.75 00:30:57.304 clat percentiles (usec): 00:30:57.304 | 1.00th=[15533], 5.00th=[23987], 10.00th=[25035], 20.00th=[25297], 00:30:57.304 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.304 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[29230], 00:30:57.304 | 99.00th=[41681], 99.50th=[44303], 99.90th=[48497], 99.95th=[48497], 00:30:57.304 | 99.99th=[48497] 00:30:57.304 bw ( KiB/s): min= 2219, max= 2560, per=4.13%, avg=2417.42, stdev=77.89, samples=19 00:30:57.304 iops : min= 554, max= 640, avg=604.21, stdev=19.57, samples=19 00:30:57.304 lat (msec) : 
10=0.49%, 20=2.29%, 50=97.22% 00:30:57.304 cpu : usr=97.38%, sys=2.25%, ctx=16, majf=0, minf=9 00:30:57.304 IO depths : 1=3.3%, 2=6.9%, 4=18.5%, 8=61.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 complete : 0=0.0%, 4=92.9%, 8=2.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.304 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.304 filename2: (groupid=0, jobs=1): err= 0: pid=4077490: Thu Jul 25 10:45:59 2024 00:30:57.304 read: IOPS=610, BW=2442KiB/s (2501kB/s)(23.9MiB/10003msec) 00:30:57.304 slat (nsec): min=4519, max=99395, avg=35014.11, stdev=16054.46 00:30:57.304 clat (usec): min=8959, max=50449, avg=25920.11, stdev=1918.56 00:30:57.304 lat (usec): min=8966, max=50463, avg=25955.13, stdev=1917.76 00:30:57.304 clat percentiles (usec): 00:30:57.304 | 1.00th=[17957], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:30:57.304 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:30:57.304 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[27132], 00:30:57.304 | 99.00th=[34341], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730], 00:30:57.304 | 99.99th=[50594] 00:30:57.304 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2443.47, stdev=68.78, samples=19 00:30:57.304 iops : min= 576, max= 640, avg=610.84, stdev=17.17, samples=19 00:30:57.304 lat (msec) : 10=0.03%, 20=1.39%, 50=98.53%, 100=0.05% 00:30:57.304 cpu : usr=97.28%, sys=2.36%, ctx=21, majf=0, minf=9 00:30:57.304 IO depths : 1=5.6%, 2=11.3%, 4=23.5%, 8=52.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:30:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.304 issued rwts: total=6108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.304 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.304 filename2: (groupid=0, jobs=1): err= 0: pid=4077491: Thu Jul 25 10:45:59 2024 00:30:57.304 read: IOPS=611, BW=2445KiB/s (2504kB/s)(23.9MiB/10019msec) 00:30:57.304 slat (usec): min=6, max=104, avg=19.58, stdev=12.10 00:30:57.304 clat (usec): min=10490, max=42376, avg=26017.88, stdev=2052.37 00:30:57.304 lat (usec): min=10502, max=42392, avg=26037.46, stdev=2051.88 00:30:57.304 clat percentiles (usec): 00:30:57.304 | 1.00th=[16909], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:30:57.304 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.304 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27395], 00:30:57.305 | 99.00th=[33162], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:30:57.305 | 99.99th=[42206] 00:30:57.305 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2444.25, stdev=53.76, samples=20 00:30:57.305 iops : min= 576, max= 640, avg=611.00, stdev=13.40, samples=20 00:30:57.305 lat (msec) : 20=1.70%, 50=98.30% 00:30:57.305 cpu : usr=96.97%, sys=2.67%, ctx=23, majf=0, minf=9 00:30:57.305 IO depths : 1=4.7%, 2=9.7%, 4=21.8%, 8=55.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:30:57.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 issued rwts: total=6124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.305 filename2: (groupid=0, jobs=1): err= 0: pid=4077492: Thu Jul 25 10:45:59 2024 
00:30:57.305 read: IOPS=609, BW=2439KiB/s (2497kB/s)(23.8MiB/10009msec) 00:30:57.305 slat (nsec): min=6362, max=87670, avg=30420.29, stdev=16008.05 00:30:57.305 clat (usec): min=13108, max=41093, avg=25994.37, stdev=1772.81 00:30:57.305 lat (usec): min=13121, max=41107, avg=26024.79, stdev=1771.08 00:30:57.305 clat percentiles (usec): 00:30:57.305 | 1.00th=[20317], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:30:57.305 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:30:57.305 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27395], 00:30:57.305 | 99.00th=[34341], 99.50th=[34341], 99.90th=[39060], 99.95th=[41157], 00:30:57.305 | 99.99th=[41157] 00:30:57.305 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2434.00, stdev=62.53, samples=19 00:30:57.305 iops : min= 576, max= 640, avg=608.42, stdev=15.64, samples=19 00:30:57.305 lat (msec) : 20=0.95%, 50=99.05% 00:30:57.305 cpu : usr=97.08%, sys=2.58%, ctx=17, majf=0, minf=9 00:30:57.305 IO depths : 1=5.0%, 2=10.0%, 4=22.2%, 8=55.1%, 16=7.7%, 32=0.0%, >=64=0.0% 00:30:57.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 issued rwts: total=6102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.305 filename2: (groupid=0, jobs=1): err= 0: pid=4077493: Thu Jul 25 10:45:59 2024 00:30:57.305 read: IOPS=620, BW=2484KiB/s (2543kB/s)(24.3MiB/10021msec) 00:30:57.305 slat (nsec): min=6341, max=88962, avg=14595.57, stdev=8139.87 00:30:57.305 clat (usec): min=2825, max=49034, avg=25657.57, stdev=3297.45 00:30:57.305 lat (usec): min=2840, max=49054, avg=25672.16, stdev=3297.75 00:30:57.305 clat percentiles (usec): 00:30:57.305 | 1.00th=[10683], 5.00th=[20841], 10.00th=[24773], 20.00th=[25297], 00:30:57.305 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.305 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26870], 95.00th=[27395], 00:30:57.305 | 99.00th=[37487], 99.50th=[41681], 99.90th=[47973], 99.95th=[49021], 00:30:57.305 | 99.99th=[49021] 00:30:57.305 bw ( KiB/s): min= 2400, max= 2688, per=4.25%, avg=2482.40, stdev=85.34, samples=20 00:30:57.305 iops : min= 600, max= 672, avg=620.60, stdev=21.34, samples=20 00:30:57.305 lat (msec) : 4=0.26%, 10=0.64%, 20=3.57%, 50=95.53% 00:30:57.305 cpu : usr=96.65%, sys=2.98%, ctx=24, majf=0, minf=9 00:30:57.305 IO depths : 1=4.6%, 2=9.3%, 4=20.7%, 8=57.2%, 16=8.1%, 32=0.0%, >=64=0.0% 00:30:57.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 issued rwts: total=6222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.305 filename2: (groupid=0, jobs=1): err= 0: pid=4077494: Thu Jul 25 10:45:59 2024 00:30:57.305 read: IOPS=596, BW=2387KiB/s (2444kB/s)(23.3MiB/10001msec) 00:30:57.305 slat (nsec): min=6253, max=81082, avg=23479.64, stdev=16911.15 00:30:57.305 clat (usec): min=6810, max=48413, avg=26640.19, stdev=4715.29 00:30:57.305 lat (usec): min=6822, max=48427, avg=26663.66, stdev=4714.38 00:30:57.305 clat percentiles (usec): 00:30:57.305 | 1.00th=[10421], 5.00th=[21103], 10.00th=[24511], 20.00th=[25297], 00:30:57.305 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:30:57.305 | 70.00th=[26346], 80.00th=[26870], 90.00th=[31065], 95.00th=[35914], 
00:30:57.305 | 99.00th=[43779], 99.50th=[44827], 99.90th=[47973], 99.95th=[48497], 00:30:57.305 | 99.99th=[48497] 00:30:57.305 bw ( KiB/s): min= 2227, max= 2467, per=4.07%, avg=2377.05, stdev=73.10, samples=19 00:30:57.305 iops : min= 556, max= 616, avg=594.11, stdev=18.29, samples=19 00:30:57.305 lat (msec) : 10=0.99%, 20=3.67%, 50=95.34% 00:30:57.305 cpu : usr=97.45%, sys=2.18%, ctx=21, majf=0, minf=9 00:30:57.305 IO depths : 1=2.3%, 2=4.5%, 4=13.2%, 8=67.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:57.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 complete : 0=0.0%, 4=91.6%, 8=4.6%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.305 filename2: (groupid=0, jobs=1): err= 0: pid=4077495: Thu Jul 25 10:45:59 2024 00:30:57.305 read: IOPS=632, BW=2530KiB/s (2591kB/s)(24.7MiB/10002msec) 00:30:57.305 slat (usec): min=6, max=104, avg=25.11, stdev=18.21 00:30:57.305 clat (usec): min=3250, max=46347, avg=25132.02, stdev=4410.14 00:30:57.305 lat (usec): min=3258, max=46354, avg=25157.13, stdev=4413.84 00:30:57.305 clat percentiles (usec): 00:30:57.305 | 1.00th=[13173], 5.00th=[16188], 10.00th=[18744], 20.00th=[24511], 00:30:57.305 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:30:57.305 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[31589], 00:30:57.305 | 99.00th=[39060], 99.50th=[42730], 99.90th=[43779], 99.95th=[44303], 00:30:57.305 | 99.99th=[46400] 00:30:57.305 bw ( KiB/s): min= 2176, max= 2928, per=4.27%, avg=2498.32, stdev=192.44, samples=19 00:30:57.305 iops : min= 544, max= 732, avg=624.42, stdev=48.19, samples=19 00:30:57.305 lat (msec) : 4=0.09%, 10=0.70%, 20=11.00%, 50=88.21% 00:30:57.305 cpu : usr=96.90%, sys=2.73%, ctx=15, majf=0, minf=9 00:30:57.305 IO depths : 1=1.0%, 2=3.3%, 4=11.9%, 8=70.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:57.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 complete : 0=0.0%, 4=91.4%, 8=4.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 issued rwts: total=6326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.305 filename2: (groupid=0, jobs=1): err= 0: pid=4077496: Thu Jul 25 10:45:59 2024 00:30:57.305 read: IOPS=589, BW=2359KiB/s (2416kB/s)(23.0MiB/10003msec) 00:30:57.305 slat (nsec): min=6072, max=85900, avg=22512.45, stdev=15057.06 00:30:57.305 clat (usec): min=4415, max=44000, avg=26991.02, stdev=3693.06 00:30:57.305 lat (usec): min=4421, max=44020, avg=27013.53, stdev=3691.93 00:30:57.305 clat percentiles (usec): 00:30:57.305 | 1.00th=[19268], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:30:57.305 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:30:57.305 | 70.00th=[26346], 80.00th=[26870], 90.00th=[30540], 95.00th=[35390], 00:30:57.305 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:30:57.305 | 99.99th=[43779] 00:30:57.305 bw ( KiB/s): min= 1888, max= 2528, per=4.06%, avg=2375.37, stdev=144.30, samples=19 00:30:57.305 iops : min= 472, max= 632, avg=593.68, stdev=36.04, samples=19 00:30:57.305 lat (msec) : 10=0.20%, 20=1.07%, 50=98.73% 00:30:57.305 cpu : usr=97.29%, sys=2.35%, ctx=21, majf=0, minf=9 00:30:57.305 IO depths : 1=1.2%, 2=2.5%, 4=8.6%, 8=72.8%, 16=14.8%, 32=0.0%, >=64=0.0% 00:30:57.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:57.305 complete : 0=0.0%, 4=91.1%, 8=6.6%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.305 issued rwts: total=5900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.305 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:57.305 00:30:57.305 Run status group 0 (all jobs): 00:30:57.305 READ: bw=57.1MiB/s (59.9MB/s), 2252KiB/s-2544KiB/s (2306kB/s-2605kB/s), io=572MiB (600MB), run=10001-10023msec 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.305 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 bdev_null0 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 [2024-07-25 10:45:59.673554] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 bdev_null1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:57.306 { 00:30:57.306 "params": { 00:30:57.306 "name": "Nvme$subsystem", 00:30:57.306 "trtype": "$TEST_TRANSPORT", 00:30:57.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.306 "adrfam": "ipv4", 00:30:57.306 "trsvcid": "$NVMF_PORT", 00:30:57.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.306 "hdgst": ${hdgst:-false}, 00:30:57.306 "ddgst": ${ddgst:-false} 00:30:57.306 }, 00:30:57.306 "method": "bdev_nvme_attach_controller" 00:30:57.306 } 00:30:57.306 EOF 00:30:57.306 )") 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:57.306 { 00:30:57.306 "params": { 00:30:57.306 "name": "Nvme$subsystem", 00:30:57.306 "trtype": "$TEST_TRANSPORT", 00:30:57.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.306 "adrfam": "ipv4", 00:30:57.306 "trsvcid": "$NVMF_PORT", 00:30:57.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.306 "hdgst": ${hdgst:-false}, 00:30:57.306 "ddgst": ${ddgst:-false} 00:30:57.306 }, 00:30:57.306 "method": "bdev_nvme_attach_controller" 00:30:57.306 } 00:30:57.306 EOF 00:30:57.306 )") 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
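(The subsystem setup this second test case performs through rpc_cmd is shown in the trace above: two null bdevs with 16-byte metadata and DIF type 1, each exported through its own NVMe-oF subsystem with a TCP listener on 10.0.0.2:4420. Below is a sketch of the same sequence as direct scripts/rpc.py calls; the rpc.py path and the assumption that rpc_cmd forwards to it are not shown in this excerpt, while the arguments are copied from the trace.)

# sketch: replay the traced setup as direct RPC calls (assumed rpc.py path)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for sub in 0 1; do
    $rpc bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
        --serial-number 53313233-$sub --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
        -t tcp -a 10.0.0.2 -s 4420
done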
00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:57.306 10:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:57.306 "params": { 00:30:57.306 "name": "Nvme0", 00:30:57.306 "trtype": "tcp", 00:30:57.306 "traddr": "10.0.0.2", 00:30:57.306 "adrfam": "ipv4", 00:30:57.306 "trsvcid": "4420", 00:30:57.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.306 "hdgst": false, 00:30:57.306 "ddgst": false 00:30:57.306 }, 00:30:57.307 "method": "bdev_nvme_attach_controller" 00:30:57.307 },{ 00:30:57.307 "params": { 00:30:57.307 "name": "Nvme1", 00:30:57.307 "trtype": "tcp", 00:30:57.307 "traddr": "10.0.0.2", 00:30:57.307 "adrfam": "ipv4", 00:30:57.307 "trsvcid": "4420", 00:30:57.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:57.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:57.307 "hdgst": false, 00:30:57.307 "ddgst": false 00:30:57.307 }, 00:30:57.307 "method": "bdev_nvme_attach_controller" 00:30:57.307 }' 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:57.307 10:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.307 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:57.307 ... 00:30:57.307 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:57.307 ... 
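(The knobs set at target/dif.sh@115 above (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) are what produce the two filename groups and the four threads fio reports next. A rough hand-written equivalent of the generated job file follows as a sketch only; thread mode, time_based, and the Nvme0n1/Nvme1n1 bdev names are assumptions, since the harness builds the real job file with gen_fio_conf and feeds it over /dev/fd/61.)

# sketch: an equivalent hand-written job file for this pass (assumed bdev names)
cat > /tmp/dif_rand_8k16k128k.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF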
00:30:57.307 fio-3.35 00:30:57.307 Starting 4 threads 00:30:57.307 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.585 00:31:02.585 filename0: (groupid=0, jobs=1): err= 0: pid=4079482: Thu Jul 25 10:46:05 2024 00:31:02.585 read: IOPS=2722, BW=21.3MiB/s (22.3MB/s)(106MiB/5001msec) 00:31:02.585 slat (nsec): min=5825, max=38628, avg=8519.97, stdev=2778.44 00:31:02.585 clat (usec): min=1462, max=43621, avg=2915.56, stdev=1084.11 00:31:02.585 lat (usec): min=1469, max=43641, avg=2924.08, stdev=1084.11 00:31:02.585 clat percentiles (usec): 00:31:02.585 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540], 00:31:02.585 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2900], 00:31:02.585 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3752], 00:31:02.585 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[43779], 00:31:02.585 | 99.99th=[43779] 00:31:02.585 bw ( KiB/s): min=20152, max=22576, per=25.25%, avg=21778.67, stdev=696.80, samples=9 00:31:02.585 iops : min= 2519, max= 2822, avg=2722.33, stdev=87.10, samples=9 00:31:02.585 lat (msec) : 2=0.87%, 4=96.60%, 10=2.48%, 50=0.06% 00:31:02.585 cpu : usr=93.24%, sys=6.48%, ctx=10, majf=0, minf=108 00:31:02.585 IO depths : 1=0.4%, 2=2.0%, 4=68.2%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.585 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.585 issued rwts: total=13614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.585 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:02.585 filename0: (groupid=0, jobs=1): err= 0: pid=4079483: Thu Jul 25 10:46:05 2024 00:31:02.585 read: IOPS=2708, BW=21.2MiB/s (22.2MB/s)(106MiB/5002msec) 00:31:02.585 slat (nsec): min=5837, max=25310, avg=8663.94, stdev=2808.58 00:31:02.585 clat (usec): min=1258, max=44850, avg=2929.98, stdev=1106.51 00:31:02.585 lat (usec): min=1264, max=44874, avg=2938.64, stdev=1106.56 00:31:02.585 clat percentiles (usec): 00:31:02.585 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2573], 00:31:02.585 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2900], 00:31:02.585 | 70.00th=[ 3032], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3720], 00:31:02.585 | 99.00th=[ 4146], 99.50th=[ 4228], 99.90th=[ 4752], 99.95th=[44827], 00:31:02.585 | 99.99th=[44827] 00:31:02.585 bw ( KiB/s): min=20224, max=22640, per=25.10%, avg=21648.00, stdev=646.32, samples=9 00:31:02.585 iops : min= 2528, max= 2830, avg=2706.00, stdev=80.79, samples=9 00:31:02.585 lat (msec) : 2=0.70%, 4=97.41%, 10=1.83%, 50=0.06% 00:31:02.585 cpu : usr=93.02%, sys=6.68%, ctx=13, majf=0, minf=72 00:31:02.585 IO depths : 1=0.3%, 2=1.8%, 4=68.2%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.585 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.585 issued rwts: total=13550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.585 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:02.585 filename1: (groupid=0, jobs=1): err= 0: pid=4079484: Thu Jul 25 10:46:05 2024 00:31:02.585 read: IOPS=2712, BW=21.2MiB/s (22.2MB/s)(106MiB/5002msec) 00:31:02.585 slat (nsec): min=5827, max=26796, avg=8674.02, stdev=2952.00 00:31:02.585 clat (usec): min=1030, max=4827, avg=2926.52, stdev=506.81 00:31:02.585 lat (usec): min=1036, max=4839, avg=2935.20, stdev=506.71 00:31:02.585 clat percentiles (usec): 00:31:02.585 | 1.00th=[ 
1221], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2606], 00:31:02.585 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2933], 00:31:02.585 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3589], 95.00th=[ 3851], 00:31:02.585 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4490], 99.95th=[ 4621], 00:31:02.585 | 99.99th=[ 4817] 00:31:02.585 bw ( KiB/s): min=21120, max=23616, per=25.17%, avg=21712.00, stdev=743.40, samples=9 00:31:02.585 iops : min= 2640, max= 2952, avg=2714.00, stdev=92.92, samples=9 00:31:02.585 lat (msec) : 2=2.60%, 4=94.39%, 10=3.01% 00:31:02.585 cpu : usr=94.18%, sys=5.54%, ctx=7, majf=0, minf=75 00:31:02.585 IO depths : 1=0.2%, 2=1.6%, 4=67.5%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.585 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.585 issued rwts: total=13568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.585 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:02.585 filename1: (groupid=0, jobs=1): err= 0: pid=4079485: Thu Jul 25 10:46:05 2024 00:31:02.585 read: IOPS=2705, BW=21.1MiB/s (22.2MB/s)(107MiB/5043msec) 00:31:02.585 slat (nsec): min=5781, max=64306, avg=8625.48, stdev=2795.37 00:31:02.585 clat (usec): min=1564, max=44581, avg=2918.39, stdev=1254.81 00:31:02.585 lat (usec): min=1571, max=44613, avg=2927.02, stdev=1254.83 00:31:02.585 clat percentiles (usec): 00:31:02.585 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540], 00:31:02.586 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2900], 00:31:02.586 | 70.00th=[ 2999], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3785], 00:31:02.586 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4686], 99.95th=[44303], 00:31:02.586 | 99.99th=[44827] 00:31:02.586 bw ( KiB/s): min=19856, max=22416, per=25.30%, avg=21824.00, stdev=769.22, samples=10 00:31:02.586 iops : min= 2482, max= 2802, avg=2728.00, stdev=96.15, samples=10 00:31:02.586 lat (msec) : 2=0.79%, 4=96.01%, 10=3.12%, 50=0.08% 00:31:02.586 cpu : usr=92.90%, sys=6.82%, ctx=10, majf=0, minf=80 00:31:02.586 IO depths : 1=0.3%, 2=1.7%, 4=68.2%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.586 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.586 issued rwts: total=13643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.586 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:02.586 00:31:02.586 Run status group 0 (all jobs): 00:31:02.586 READ: bw=84.2MiB/s (88.3MB/s), 21.1MiB/s-21.3MiB/s (22.2MB/s-22.3MB/s), io=425MiB (445MB), run=5001-5043msec 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 00:31:02.586 real 0m24.482s 00:31:02.586 user 4m53.297s 00:31:02.586 sys 0m9.626s 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 ************************************ 00:31:02.586 END TEST fio_dif_rand_params 00:31:02.586 ************************************ 00:31:02.586 10:46:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:02.586 10:46:06 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:02.586 10:46:06 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 ************************************ 00:31:02.586 START TEST fio_dif_digest 00:31:02.586 ************************************ 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 bdev_null0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:02.586 [2024-07-25 10:46:06.235854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:02.586 { 00:31:02.586 "params": { 00:31:02.586 "name": "Nvme$subsystem", 00:31:02.586 "trtype": "$TEST_TRANSPORT", 00:31:02.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:02.586 "adrfam": "ipv4", 00:31:02.586 "trsvcid": "$NVMF_PORT", 00:31:02.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:02.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:02.586 "hdgst": ${hdgst:-false}, 00:31:02.586 "ddgst": ${ddgst:-false} 00:31:02.586 }, 00:31:02.586 "method": 
"bdev_nvme_attach_controller" 00:31:02.586 } 00:31:02.586 EOF 00:31:02.586 )") 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.586 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:02.587 "params": { 00:31:02.587 "name": "Nvme0", 00:31:02.587 "trtype": "tcp", 00:31:02.587 "traddr": "10.0.0.2", 00:31:02.587 "adrfam": "ipv4", 00:31:02.587 "trsvcid": "4420", 00:31:02.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.587 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.587 "hdgst": true, 00:31:02.587 "ddgst": true 00:31:02.587 }, 00:31:02.587 "method": "bdev_nvme_attach_controller" 00:31:02.587 }' 00:31:02.587 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:02.877 10:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.138 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:03.138 ... 
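On the host side, fio preloads build/fio/spdk_bdev, attaches Nvme0 over TCP with header and data digests enabled per the JSON printed above, and runs the job against the resulting bdev. A rough standalone equivalent follows; the "subsystems"/"bdev" wrapper around the printed params, the bdev name Nvme0n1, and the explicit bdev.json/digest.fio files are assumptions (the harness feeds both configs through /dev/fd/62 and /dev/fd/61, and gen_fio_conf emits the real job file):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true, "ddgst": true
      }
    }]
  }]
}
EOF
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1
[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio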
00:31:03.138 fio-3.35 00:31:03.138 Starting 3 threads 00:31:03.138 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.347 00:31:15.347 filename0: (groupid=0, jobs=1): err= 0: pid=4080685: Thu Jul 25 10:46:17 2024 00:31:15.347 read: IOPS=329, BW=41.2MiB/s (43.2MB/s)(414MiB/10045msec) 00:31:15.347 slat (nsec): min=6086, max=67169, avg=11636.25, stdev=3811.85 00:31:15.347 clat (usec): min=5980, max=53309, avg=9080.56, stdev=3981.83 00:31:15.347 lat (usec): min=5991, max=53322, avg=9092.20, stdev=3982.13 00:31:15.347 clat percentiles (usec): 00:31:15.347 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7439], 00:31:15.347 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:31:15.347 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:31:15.347 | 99.00th=[11731], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:31:15.347 | 99.99th=[53216] 00:31:15.347 bw ( KiB/s): min=33090, max=49152, per=40.87%, avg=42332.90, stdev=3475.47, samples=20 00:31:15.347 iops : min= 258, max= 384, avg=330.70, stdev=27.22, samples=20 00:31:15.347 lat (msec) : 10=84.22%, 20=14.99%, 50=0.06%, 100=0.73% 00:31:15.347 cpu : usr=93.22%, sys=6.44%, ctx=15, majf=0, minf=135 00:31:15.347 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.347 issued rwts: total=3309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.347 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:15.347 filename0: (groupid=0, jobs=1): err= 0: pid=4080686: Thu Jul 25 10:46:17 2024 00:31:15.347 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(406MiB/10045msec) 00:31:15.347 slat (nsec): min=6139, max=67335, avg=12851.87, stdev=4303.90 00:31:15.347 clat (usec): min=4359, max=54258, avg=9249.65, stdev=2224.49 00:31:15.347 lat (usec): min=4368, max=54326, avg=9262.50, stdev=2225.50 00:31:15.347 clat percentiles (usec): 00:31:15.347 | 1.00th=[ 4883], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 7832], 00:31:15.347 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:31:15.347 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:31:15.347 | 99.00th=[12125], 99.50th=[12387], 99.90th=[49546], 99.95th=[53740], 00:31:15.347 | 99.99th=[54264] 00:31:15.347 bw ( KiB/s): min=36608, max=46080, per=40.11%, avg=41548.80, stdev=2596.26, samples=20 00:31:15.347 iops : min= 286, max= 360, avg=324.60, stdev=20.28, samples=20 00:31:15.347 lat (msec) : 10=65.64%, 20=34.21%, 50=0.06%, 100=0.09% 00:31:15.347 cpu : usr=89.18%, sys=8.47%, ctx=1132, majf=0, minf=181 00:31:15.347 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.347 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.347 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:15.347 filename0: (groupid=0, jobs=1): err= 0: pid=4080687: Thu Jul 25 10:46:17 2024 00:31:15.347 read: IOPS=156, BW=19.6MiB/s (20.6MB/s)(196MiB/10010msec) 00:31:15.347 slat (nsec): min=6134, max=89355, avg=16285.17, stdev=7622.28 00:31:15.347 clat (msec): min=6, max=100, avg=19.09, stdev=15.25 00:31:15.347 lat (msec): min=6, max=100, avg=19.11, stdev=15.25 00:31:15.347 clat percentiles (msec): 00:31:15.347 | 1.00th=[ 
8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:31:15.347 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:31:15.347 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 55], 95.00th=[ 56], 00:31:15.347 | 99.00th=[ 59], 99.50th=[ 97], 99.90th=[ 101], 99.95th=[ 102], 00:31:15.347 | 99.99th=[ 102] 00:31:15.347 bw ( KiB/s): min=13824, max=30464, per=19.38%, avg=20070.40, stdev=4527.50, samples=20 00:31:15.347 iops : min= 108, max= 238, avg=156.80, stdev=35.37, samples=20 00:31:15.347 lat (msec) : 10=3.69%, 20=83.13%, 50=0.06%, 100=12.99%, 250=0.13% 00:31:15.347 cpu : usr=94.28%, sys=5.45%, ctx=27, majf=0, minf=199 00:31:15.347 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.347 issued rwts: total=1571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.347 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:15.347 00:31:15.347 Run status group 0 (all jobs): 00:31:15.347 READ: bw=101MiB/s (106MB/s), 19.6MiB/s-41.2MiB/s (20.6MB/s-43.2MB/s), io=1016MiB (1065MB), run=10010-10045msec 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.347 00:31:15.347 real 0m11.165s 00:31:15.347 user 0m37.307s 00:31:15.347 sys 0m2.413s 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:15.347 10:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:15.347 ************************************ 00:31:15.347 END TEST fio_dif_digest 00:31:15.347 ************************************ 00:31:15.347 10:46:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:15.347 10:46:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:15.347 rmmod nvme_tcp 00:31:15.347 rmmod nvme_fabrics 00:31:15.347 rmmod nvme_keyring 
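The teardown around this point is symmetric to the setup: drop the subsystem, drop the null bdev behind it, unload the kernel initiator modules, and stop the target. Expressed directly against rpc.py rather than the rpc_cmd wrapper (the default socket path and the $nvmfpid variable are assumptions for a standalone run; the target pid in this log was 4071830):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # detach hosts from the subsystem first
$SPDK/scripts/rpc.py bdev_null_delete bdev_null0                        # then remove the DIF-enabled null bdev
sudo modprobe -v -r nvme-tcp       # pulls out nvme_tcp, nvme_fabrics and nvme_keyring, as logged above
sudo modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                                      # stop the nvmf_tgt application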
00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 4071830 ']' 00:31:15.347 10:46:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 4071830 00:31:15.347 10:46:17 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 4071830 ']' 00:31:15.347 10:46:17 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 4071830 00:31:15.347 10:46:17 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:31:15.347 10:46:17 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:15.347 10:46:17 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4071830 00:31:15.348 10:46:17 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:15.348 10:46:17 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:15.348 10:46:17 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4071830' 00:31:15.348 killing process with pid 4071830 00:31:15.348 10:46:17 nvmf_dif -- common/autotest_common.sh@969 -- # kill 4071830 00:31:15.348 10:46:17 nvmf_dif -- common/autotest_common.sh@974 -- # wait 4071830 00:31:15.348 10:46:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:15.348 10:46:17 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:17.251 Waiting for block devices as requested 00:31:17.251 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:17.251 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:17.251 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:17.251 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:17.251 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:17.511 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:17.511 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:17.511 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:17.511 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:17.770 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:17.770 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:17.770 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:18.066 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:18.066 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:18.066 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:18.325 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:18.325 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:18.325 10:46:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:18.325 10:46:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:18.325 10:46:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:18.325 10:46:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:18.325 10:46:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.325 10:46:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:18.325 10:46:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.860 10:46:24 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.860 00:31:20.860 real 1m15.823s 00:31:20.860 user 7m14.931s 00:31:20.860 sys 0m29.392s 00:31:20.860 10:46:24 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.860 10:46:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:20.860 
************************************ 00:31:20.860 END TEST nvmf_dif 00:31:20.860 ************************************ 00:31:20.860 10:46:24 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:20.860 10:46:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:20.860 10:46:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:20.860 10:46:24 -- common/autotest_common.sh@10 -- # set +x 00:31:20.860 ************************************ 00:31:20.860 START TEST nvmf_abort_qd_sizes 00:31:20.860 ************************************ 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:20.860 * Looking for test storage... 00:31:20.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.860 10:46:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:20.860 10:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:27.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:27.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:27.429 Found net devices under 0000:af:00.0: cvl_0_0 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.429 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:27.430 Found net devices under 0000:af:00.1: cvl_0_1 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
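The device discovery above comes down to mapping each supported NIC PCI address to its kernel net device through sysfs. A minimal sketch with the two E810 ports found in this run hard-coded (the harness builds the PCI list from its lspci cache instead):

for pci in 0000:af:00.0 0000:af:00.1; do
  for netdir in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdir" ] || continue        # skip ports with no bound network driver
    echo "Found net devices under $pci: ${netdir##*/}"
  done
done
# prints cvl_0_0 and cvl_0_1 on this machine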
00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:27.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:31:27.430 00:31:27.430 --- 10.0.0.2 ping statistics --- 00:31:27.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.430 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:27.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:31:27.430 00:31:27.430 --- 10.0.0.1 ping statistics --- 00:31:27.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.430 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:27.430 10:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:30.720 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:30.720 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:32.098 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=4088943 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 4088943 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 4088943 ']' 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
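nvmfappstart above launches the target inside the test namespace and then blocks until its RPC socket answers. A standalone approximation of that step, with the rpc_get_methods polling loop and the sudo calls as assumptions (waitforlisten in the harness does this more carefully, keyed on the pid):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
until sudo $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5                             # keep polling until the target serves RPCs
done
echo "nvmf_tgt is up and serving RPCs on /var/tmp/spdk.sock"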
00:31:32.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:32.098 10:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:32.098 [2024-07-25 10:46:35.645258] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:31:32.098 [2024-07-25 10:46:35.645314] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.098 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.098 [2024-07-25 10:46:35.720037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.098 [2024-07-25 10:46:35.796282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.098 [2024-07-25 10:46:35.796321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.098 [2024-07-25 10:46:35.796330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.098 [2024-07-25 10:46:35.796339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.098 [2024-07-25 10:46:35.796362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.098 [2024-07-25 10:46:35.796415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.098 [2024-07-25 10:46:35.796507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.098 [2024-07-25 10:46:35.796591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:32.098 [2024-07-25 10:46:35.796593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:31:33.034 10:46:36 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:33.034 10:46:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:33.034 ************************************ 00:31:33.034 START TEST spdk_target_abort 00:31:33.034 ************************************ 00:31:33.034 10:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:33.034 10:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:33.034 10:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:31:33.034 10:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.034 10:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.359 spdk_targetn1 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.359 [2024-07-25 10:46:39.409269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.359 [2024-07-25 10:46:39.445522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.359 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:36.360 10:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.360 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:39.648 Initializing NVMe Controllers 00:31:39.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:39.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:39.648 Initialization complete. Launching workers. 00:31:39.648 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10825, failed: 0 00:31:39.648 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1475, failed to submit 9350 00:31:39.648 success 858, unsuccess 617, failed 0 00:31:39.648 10:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:39.648 10:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.648 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.005 Initializing NVMe Controllers 00:31:43.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.005 Initialization complete. Launching workers. 00:31:43.005 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8562, failed: 0 00:31:43.005 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7332 00:31:43.005 success 336, unsuccess 894, failed 0 00:31:43.005 10:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.005 10:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.005 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.290 Initializing NVMe Controllers 00:31:46.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.290 Initialization complete. Launching workers. 
00:31:46.290 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38546, failed: 0 00:31:46.290 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2766, failed to submit 35780 00:31:46.290 success 618, unsuccess 2148, failed 0 00:31:46.290 10:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:46.290 10:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.290 10:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.290 10:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.290 10:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:46.290 10:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.290 10:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4088943 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 4088943 ']' 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 4088943 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4088943 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4088943' 00:31:47.668 killing process with pid 4088943 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 4088943 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 4088943 00:31:47.668 00:31:47.668 real 0m14.741s 00:31:47.668 user 0m58.377s 00:31:47.668 sys 0m2.788s 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.668 10:46:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.668 ************************************ 00:31:47.668 END TEST spdk_target_abort 00:31:47.668 ************************************ 00:31:47.668 10:46:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:47.668 10:46:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:47.668 10:46:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:47.668 10:46:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.927 ************************************ 00:31:47.927 START TEST kernel_target_abort 00:31:47.927 
************************************ 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:47.927 10:46:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:50.466 Waiting for block devices as requested 00:31:50.466 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:50.466 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:50.466 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:50.725 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:50.725 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:50.725 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:50.984 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:50.984 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:50.984 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:51.244 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:51.244 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:51.244 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:51.503 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:51.503 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:51.503 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:51.762 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:51.762 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:51.762 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:52.021 No valid GPT data, bailing 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:52.021 10:46:55 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:31:52.021 00:31:52.021 Discovery Log Number of Records 2, Generation counter 2 00:31:52.021 =====Discovery Log Entry 0====== 00:31:52.021 trtype: tcp 00:31:52.021 adrfam: ipv4 00:31:52.021 subtype: current discovery subsystem 00:31:52.021 treq: not specified, sq flow control disable supported 00:31:52.021 portid: 1 00:31:52.021 trsvcid: 4420 00:31:52.021 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:52.021 traddr: 10.0.0.1 00:31:52.021 eflags: none 00:31:52.021 sectype: none 00:31:52.021 =====Discovery Log Entry 1====== 00:31:52.021 trtype: tcp 00:31:52.021 adrfam: ipv4 00:31:52.021 subtype: nvme subsystem 00:31:52.021 treq: not specified, sq flow control disable supported 00:31:52.021 portid: 1 00:31:52.021 trsvcid: 4420 00:31:52.021 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:52.021 traddr: 10.0.0.1 00:31:52.021 eflags: none 00:31:52.021 sectype: none 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.021 10:46:55 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:52.021 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.022 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:52.022 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.022 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:52.022 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.022 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.022 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:52.022 10:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.022 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.310 Initializing NVMe Controllers 00:31:55.310 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:55.310 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:55.310 Initialization complete. Launching workers. 00:31:55.310 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74350, failed: 0 00:31:55.310 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 74350, failed to submit 0 00:31:55.310 success 0, unsuccess 74350, failed 0 00:31:55.310 10:46:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.310 10:46:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.310 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.599 Initializing NVMe Controllers 00:31:58.599 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:58.599 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:58.599 Initialization complete. Launching workers. 
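For reference, the configure_kernel_target sequence traced a few lines above reduces to the configfs steps below. This is a hedged sketch rather than the exact nvmf/common.sh code: xtrace does not record the redirect targets of the echo commands, so the attribute file names used here (attr_allow_any_host, device_path, enable, addr_*) are the standard Linux nvmet configfs names and are assumed to match; loading nvmet-tcp is likewise assumed.

# hedged sketch of a kernel NVMe/TCP target equivalent to the traced setup
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                                            # shown in the trace
modprobe nvmet-tcp                                        # assumed: needed for the tcp transport
mkdir -p "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"         # attribute name assumed (standard nvmet)
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"    # backing device selected by the trace
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                       # expose the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420                  # should report the two discovery log entries shown above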
00:31:58.599 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 127880, failed: 0 00:31:58.599 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32290, failed to submit 95590 00:31:58.599 success 0, unsuccess 32290, failed 0 00:31:58.599 10:47:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.599 10:47:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:58.599 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.893 Initializing NVMe Controllers 00:32:01.893 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:01.893 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:01.893 Initialization complete. Launching workers. 00:32:01.893 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 122782, failed: 0 00:32:01.893 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30710, failed to submit 92072 00:32:01.893 success 0, unsuccess 30710, failed 0 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:01.893 10:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:04.463 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:04.463 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:04.463 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:04.463 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:32:04.464 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:04.464 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:05.883 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:06.143 00:32:06.143 real 0m18.248s 00:32:06.143 user 0m7.407s 00:32:06.143 sys 0m5.526s 00:32:06.143 10:47:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:06.143 10:47:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.143 ************************************ 00:32:06.143 END TEST kernel_target_abort 00:32:06.143 ************************************ 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:06.143 rmmod nvme_tcp 00:32:06.143 rmmod nvme_fabrics 00:32:06.143 rmmod nvme_keyring 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 4088943 ']' 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 4088943 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 4088943 ']' 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 4088943 00:32:06.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4088943) - No such process 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 4088943 is not found' 00:32:06.143 Process with pid 4088943 is not found 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:06.143 10:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:09.429 Waiting for block devices as requested 00:32:09.429 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:09.429 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:09.688 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:09.688 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:09.688 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:09.948 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:09.948 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:09.948 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:10.207 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:10.207 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:10.466 10:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:10.466 10:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:10.466 10:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:10.466 10:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:10.466 10:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.466 10:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:10.466 10:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.371 10:47:15 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:12.371 00:32:12.371 real 0m51.848s 00:32:12.371 user 1m10.006s 00:32:12.371 sys 0m17.967s 00:32:12.371 10:47:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:12.371 10:47:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:12.371 ************************************ 00:32:12.371 END TEST nvmf_abort_qd_sizes 00:32:12.371 ************************************ 00:32:12.371 10:47:16 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:12.371 10:47:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:12.371 10:47:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:12.371 10:47:16 -- common/autotest_common.sh@10 -- # set +x 00:32:12.630 ************************************ 00:32:12.630 START TEST keyring_file 00:32:12.630 ************************************ 00:32:12.630 10:47:16 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:12.630 * Looking for test storage... 
00:32:12.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:12.630 10:47:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:12.630 10:47:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.630 10:47:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:12.630 10:47:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.630 10:47:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.630 10:47:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.631 10:47:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.631 10:47:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.631 10:47:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.631 10:47:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.631 10:47:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.631 10:47:16 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.631 10:47:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:12.631 10:47:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EhevKbQOeW 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:12.631 10:47:16 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EhevKbQOeW 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EhevKbQOeW 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.EhevKbQOeW 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Q1Gv707LQP 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:12.631 10:47:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Q1Gv707LQP 00:32:12.631 10:47:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Q1Gv707LQP 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Q1Gv707LQP 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=4097958 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:12.631 10:47:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4097958 00:32:12.631 10:47:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 4097958 ']' 00:32:12.631 10:47:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.631 10:47:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:12.631 10:47:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.631 10:47:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:12.631 10:47:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:12.890 [2024-07-25 10:47:16.360755] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:32:12.890 [2024-07-25 10:47:16.360812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097958 ] 00:32:12.890 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.890 [2024-07-25 10:47:16.430168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.890 [2024-07-25 10:47:16.503587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.466 10:47:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.466 10:47:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:13.466 10:47:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:13.466 10:47:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.466 10:47:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:13.466 [2024-07-25 10:47:17.143063] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.466 null0 00:32:13.725 [2024-07-25 10:47:17.175123] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:13.725 [2024-07-25 10:47:17.175419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:13.725 [2024-07-25 10:47:17.183130] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.725 10:47:17 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:13.725 [2024-07-25 10:47:17.195162] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:13.725 request: 00:32:13.725 { 00:32:13.725 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.725 "secure_channel": false, 00:32:13.725 "listen_address": { 00:32:13.725 "trtype": "tcp", 00:32:13.725 "traddr": "127.0.0.1", 00:32:13.725 "trsvcid": "4420" 00:32:13.725 }, 00:32:13.725 "method": "nvmf_subsystem_add_listener", 00:32:13.725 "req_id": 1 00:32:13.725 } 00:32:13.725 Got JSON-RPC error response 00:32:13.725 response: 00:32:13.725 { 00:32:13.725 "code": -32602, 00:32:13.725 "message": "Invalid parameters" 00:32:13.725 } 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 
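The prep_key / keyring_file_add_key pattern being exercised in this test can be summarized with the hedged sketch below. In the trace the NVMeTLSkey-1 interchange string is produced by format_interchange_psk (the inline python fed by format_key); a placeholder stands in for that output here, so the key contents are illustrative only, and the rpc.py calls target the bperf.sock socket used by the bdevperf instance started next.

# hedged sketch of the key-file preparation and registration pattern
key_hex=00112233445566778899aabbccddeeff
key_path=$(mktemp)                      # e.g. /tmp/tmp.XXXXXXXXXX, as in the trace
# format_interchange_psk would wrap $key_hex into an NVMeTLSkey-1 interchange
# string; a placeholder is written instead of reproducing that logic here.
echo "NVMeTLSkey-1:<interchange form of $key_hex>" > "$key_path"
chmod 0600 "$key_path"                  # 0660 is rejected later in the trace ('Invalid permissions for key file')
# register the file-backed key with the bdevperf instance over its RPC socket
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys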
00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.725 10:47:17 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:13.725 10:47:17 keyring_file -- keyring/file.sh@46 -- # bperfpid=4098097 00:32:13.725 10:47:17 keyring_file -- keyring/file.sh@48 -- # waitforlisten 4098097 /var/tmp/bperf.sock 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 4098097 ']' 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:13.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:13.725 10:47:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:13.725 [2024-07-25 10:47:17.232940] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 00:32:13.725 [2024-07-25 10:47:17.232985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098097 ] 00:32:13.725 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.725 [2024-07-25 10:47:17.302028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.725 [2024-07-25 10:47:17.370666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.662 10:47:18 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:14.662 10:47:18 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:14.662 10:47:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:14.662 10:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:14.662 10:47:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Q1Gv707LQP 00:32:14.662 10:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Q1Gv707LQP 00:32:14.662 10:47:18 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:14.662 10:47:18 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:14.662 10:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.662 10:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.662 10:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.921 10:47:18 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.EhevKbQOeW == \/\t\m\p\/\t\m\p\.\E\h\e\v\K\b\Q\O\e\W ]] 00:32:14.921 10:47:18 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:14.921 10:47:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:14.921 10:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:14.921 10:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.921 10:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.181 10:47:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Q1Gv707LQP == \/\t\m\p\/\t\m\p\.\Q\1\G\v\7\0\7\L\Q\P ]] 00:32:15.181 10:47:18 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:15.181 10:47:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:15.181 10:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.181 10:47:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.181 10:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.181 10:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.181 10:47:18 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:15.181 10:47:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:15.440 10:47:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.440 10:47:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:15.440 10:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.440 10:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.440 10:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:15.440 10:47:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:15.440 10:47:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.440 10:47:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.698 [2024-07-25 10:47:19.203250] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:15.698 nvme0n1 00:32:15.698 10:47:19 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:15.698 10:47:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:15.698 10:47:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.698 10:47:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.698 10:47:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.698 10:47:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.957 10:47:19 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:15.957 10:47:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:15.957 10:47:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:15.957 10:47:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.957 10:47:19 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.957 10:47:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:15.957 10:47:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.957 10:47:19 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:15.957 10:47:19 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:16.216 Running I/O for 1 seconds... 00:32:17.153 00:32:17.153 Latency(us) 00:32:17.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.153 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:17.153 nvme0n1 : 1.01 12214.36 47.71 0.00 0.00 10426.86 7864.32 18979.23 00:32:17.153 =================================================================================================================== 00:32:17.153 Total : 12214.36 47.71 0.00 0.00 10426.86 7864.32 18979.23 00:32:17.153 0 00:32:17.153 10:47:20 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:17.153 10:47:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:17.412 10:47:20 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:17.412 10:47:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.412 10:47:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.412 10:47:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.412 10:47:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.412 10:47:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.412 10:47:21 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:17.412 10:47:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:17.412 10:47:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:17.412 10:47:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.412 10:47:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.412 10:47:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:17.412 10:47:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.671 10:47:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:17.671 10:47:21 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.671 10:47:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:17.671 10:47:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.671 10:47:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:17.671 10:47:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:17.671 10:47:21 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:17.671 10:47:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:17.671 10:47:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.671 10:47:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:17.930 [2024-07-25 10:47:21.440100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:17.930 [2024-07-25 10:47:21.441063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220c840 (107): Transport endpoint is not connected 00:32:17.930 [2024-07-25 10:47:21.442057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220c840 (9): Bad file descriptor 00:32:17.930 [2024-07-25 10:47:21.443058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:17.930 [2024-07-25 10:47:21.443069] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:17.930 [2024-07-25 10:47:21.443078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:17.930 request: 00:32:17.930 { 00:32:17.930 "name": "nvme0", 00:32:17.930 "trtype": "tcp", 00:32:17.930 "traddr": "127.0.0.1", 00:32:17.930 "adrfam": "ipv4", 00:32:17.930 "trsvcid": "4420", 00:32:17.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.930 "prchk_reftag": false, 00:32:17.930 "prchk_guard": false, 00:32:17.930 "hdgst": false, 00:32:17.930 "ddgst": false, 00:32:17.930 "psk": "key1", 00:32:17.930 "method": "bdev_nvme_attach_controller", 00:32:17.930 "req_id": 1 00:32:17.930 } 00:32:17.930 Got JSON-RPC error response 00:32:17.930 response: 00:32:17.930 { 00:32:17.930 "code": -5, 00:32:17.930 "message": "Input/output error" 00:32:17.930 } 00:32:17.930 10:47:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:17.930 10:47:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:17.930 10:47:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:17.930 10:47:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:17.930 10:47:21 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:17.930 10:47:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.930 10:47:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.930 10:47:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.930 10:47:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.930 10:47:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.189 10:47:21 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:18.189 10:47:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:18.189 10:47:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:18.189 10:47:21 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.189 10:47:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.189 10:47:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:18.189 10:47:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:18.189 10:47:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:18.189 10:47:21 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:18.189 10:47:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:18.448 10:47:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:18.448 10:47:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:18.448 10:47:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:18.448 10:47:22 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:18.448 10:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.709 10:47:22 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:18.709 10:47:22 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.EhevKbQOeW 00:32:18.709 10:47:22 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:18.709 10:47:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:18.709 10:47:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:18.709 10:47:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:18.709 10:47:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:18.709 10:47:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:18.709 10:47:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:18.709 10:47:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:18.709 10:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:18.968 [2024-07-25 10:47:22.474053] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EhevKbQOeW': 0100660 00:32:18.968 [2024-07-25 10:47:22.474080] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:18.968 request: 00:32:18.968 { 00:32:18.968 "name": "key0", 00:32:18.968 "path": "/tmp/tmp.EhevKbQOeW", 00:32:18.968 "method": "keyring_file_add_key", 00:32:18.968 "req_id": 1 00:32:18.968 } 00:32:18.968 Got JSON-RPC error response 00:32:18.968 response: 00:32:18.968 { 00:32:18.968 "code": -1, 00:32:18.968 "message": "Operation not permitted" 00:32:18.968 } 00:32:18.968 10:47:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:18.968 10:47:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:18.968 10:47:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:18.968 10:47:22 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:18.968 10:47:22 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.EhevKbQOeW 00:32:18.968 10:47:22 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:18.968 10:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EhevKbQOeW 00:32:19.227 10:47:22 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.EhevKbQOeW 00:32:19.227 10:47:22 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:19.227 10:47:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.227 10:47:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.227 10:47:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.227 10:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.227 10:47:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.227 10:47:22 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:19.227 10:47:22 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.227 10:47:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:19.227 10:47:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.227 10:47:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:19.227 10:47:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:19.227 10:47:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:19.227 10:47:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:19.227 10:47:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.227 10:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.486 [2024-07-25 10:47:23.015480] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.EhevKbQOeW': No such file or directory 00:32:19.486 [2024-07-25 10:47:23.015501] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:19.486 [2024-07-25 10:47:23.015522] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:19.486 [2024-07-25 10:47:23.015547] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:19.486 [2024-07-25 10:47:23.015555] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:19.486 request: 00:32:19.486 { 00:32:19.486 "name": "nvme0", 00:32:19.486 "trtype": "tcp", 00:32:19.486 "traddr": "127.0.0.1", 00:32:19.486 "adrfam": "ipv4", 00:32:19.486 
"trsvcid": "4420", 00:32:19.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.486 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.486 "prchk_reftag": false, 00:32:19.486 "prchk_guard": false, 00:32:19.486 "hdgst": false, 00:32:19.486 "ddgst": false, 00:32:19.486 "psk": "key0", 00:32:19.486 "method": "bdev_nvme_attach_controller", 00:32:19.486 "req_id": 1 00:32:19.486 } 00:32:19.486 Got JSON-RPC error response 00:32:19.486 response: 00:32:19.486 { 00:32:19.486 "code": -19, 00:32:19.486 "message": "No such device" 00:32:19.486 } 00:32:19.486 10:47:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:19.486 10:47:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:19.486 10:47:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:19.486 10:47:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:19.486 10:47:23 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:19.486 10:47:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:19.746 10:47:23 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sQHETzk19o 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:19.746 10:47:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:19.746 10:47:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.746 10:47:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:19.746 10:47:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:19.746 10:47:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:19.746 10:47:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sQHETzk19o 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sQHETzk19o 00:32:19.746 10:47:23 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.sQHETzk19o 00:32:19.746 10:47:23 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sQHETzk19o 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sQHETzk19o 00:32:19.746 10:47:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.746 10:47:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.005 nvme0n1 00:32:20.005 
10:47:23 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:20.005 10:47:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.005 10:47:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.005 10:47:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.005 10:47:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.005 10:47:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.264 10:47:23 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:20.264 10:47:23 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:20.264 10:47:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:20.523 10:47:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:20.523 10:47:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.523 10:47:24 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:20.523 10:47:24 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.523 10:47:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.782 10:47:24 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:20.782 10:47:24 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:20.782 10:47:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:21.041 10:47:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:21.041 10:47:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.041 10:47:24 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:21.041 10:47:24 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:21.041 10:47:24 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sQHETzk19o 00:32:21.041 10:47:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sQHETzk19o 00:32:21.300 10:47:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Q1Gv707LQP 00:32:21.300 10:47:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Q1Gv707LQP 00:32:21.559 10:47:25 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.560 10:47:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.818 nvme0n1 00:32:21.818 10:47:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:21.818 10:47:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:22.078 10:47:25 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:22.078 "subsystems": [ 00:32:22.078 { 00:32:22.078 "subsystem": "keyring", 00:32:22.078 "config": [ 00:32:22.078 { 00:32:22.078 "method": "keyring_file_add_key", 00:32:22.078 "params": { 00:32:22.078 "name": "key0", 00:32:22.078 "path": "/tmp/tmp.sQHETzk19o" 00:32:22.078 } 00:32:22.078 }, 00:32:22.078 { 00:32:22.078 "method": "keyring_file_add_key", 00:32:22.078 "params": { 00:32:22.078 "name": "key1", 00:32:22.078 "path": "/tmp/tmp.Q1Gv707LQP" 00:32:22.078 } 00:32:22.078 } 00:32:22.078 ] 00:32:22.078 }, 00:32:22.078 { 00:32:22.078 "subsystem": "iobuf", 00:32:22.078 "config": [ 00:32:22.078 { 00:32:22.078 "method": "iobuf_set_options", 00:32:22.078 "params": { 00:32:22.078 "small_pool_count": 8192, 00:32:22.078 "large_pool_count": 1024, 00:32:22.078 "small_bufsize": 8192, 00:32:22.078 "large_bufsize": 135168 00:32:22.078 } 00:32:22.078 } 00:32:22.078 ] 00:32:22.078 }, 00:32:22.078 { 00:32:22.078 "subsystem": "sock", 00:32:22.078 "config": [ 00:32:22.078 { 00:32:22.078 "method": "sock_set_default_impl", 00:32:22.078 "params": { 00:32:22.078 "impl_name": "posix" 00:32:22.078 } 00:32:22.078 }, 00:32:22.078 { 00:32:22.078 "method": "sock_impl_set_options", 00:32:22.078 "params": { 00:32:22.078 "impl_name": "ssl", 00:32:22.078 "recv_buf_size": 4096, 00:32:22.078 "send_buf_size": 4096, 00:32:22.078 "enable_recv_pipe": true, 00:32:22.078 "enable_quickack": false, 00:32:22.078 "enable_placement_id": 0, 00:32:22.078 "enable_zerocopy_send_server": true, 00:32:22.078 "enable_zerocopy_send_client": false, 00:32:22.078 "zerocopy_threshold": 0, 00:32:22.078 "tls_version": 0, 00:32:22.078 "enable_ktls": false 00:32:22.078 } 00:32:22.078 }, 00:32:22.078 { 00:32:22.078 "method": "sock_impl_set_options", 00:32:22.078 "params": { 00:32:22.078 "impl_name": "posix", 00:32:22.078 "recv_buf_size": 2097152, 00:32:22.078 "send_buf_size": 2097152, 00:32:22.078 "enable_recv_pipe": true, 00:32:22.078 "enable_quickack": false, 00:32:22.078 "enable_placement_id": 0, 00:32:22.078 "enable_zerocopy_send_server": true, 00:32:22.079 "enable_zerocopy_send_client": false, 00:32:22.079 "zerocopy_threshold": 0, 00:32:22.079 "tls_version": 0, 00:32:22.079 "enable_ktls": false 00:32:22.079 } 00:32:22.079 } 00:32:22.079 ] 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "subsystem": "vmd", 00:32:22.079 "config": [] 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "subsystem": "accel", 00:32:22.079 "config": [ 00:32:22.079 { 00:32:22.079 "method": "accel_set_options", 00:32:22.079 "params": { 00:32:22.079 "small_cache_size": 128, 00:32:22.079 "large_cache_size": 16, 00:32:22.079 "task_count": 2048, 00:32:22.079 "sequence_count": 2048, 00:32:22.079 "buf_count": 2048 00:32:22.079 } 00:32:22.079 } 00:32:22.079 ] 00:32:22.079 
}, 00:32:22.079 { 00:32:22.079 "subsystem": "bdev", 00:32:22.079 "config": [ 00:32:22.079 { 00:32:22.079 "method": "bdev_set_options", 00:32:22.079 "params": { 00:32:22.079 "bdev_io_pool_size": 65535, 00:32:22.079 "bdev_io_cache_size": 256, 00:32:22.079 "bdev_auto_examine": true, 00:32:22.079 "iobuf_small_cache_size": 128, 00:32:22.079 "iobuf_large_cache_size": 16 00:32:22.079 } 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "method": "bdev_raid_set_options", 00:32:22.079 "params": { 00:32:22.079 "process_window_size_kb": 1024, 00:32:22.079 "process_max_bandwidth_mb_sec": 0 00:32:22.079 } 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "method": "bdev_iscsi_set_options", 00:32:22.079 "params": { 00:32:22.079 "timeout_sec": 30 00:32:22.079 } 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "method": "bdev_nvme_set_options", 00:32:22.079 "params": { 00:32:22.079 "action_on_timeout": "none", 00:32:22.079 "timeout_us": 0, 00:32:22.079 "timeout_admin_us": 0, 00:32:22.079 "keep_alive_timeout_ms": 10000, 00:32:22.079 "arbitration_burst": 0, 00:32:22.079 "low_priority_weight": 0, 00:32:22.079 "medium_priority_weight": 0, 00:32:22.079 "high_priority_weight": 0, 00:32:22.079 "nvme_adminq_poll_period_us": 10000, 00:32:22.079 "nvme_ioq_poll_period_us": 0, 00:32:22.079 "io_queue_requests": 512, 00:32:22.079 "delay_cmd_submit": true, 00:32:22.079 "transport_retry_count": 4, 00:32:22.079 "bdev_retry_count": 3, 00:32:22.079 "transport_ack_timeout": 0, 00:32:22.079 "ctrlr_loss_timeout_sec": 0, 00:32:22.079 "reconnect_delay_sec": 0, 00:32:22.079 "fast_io_fail_timeout_sec": 0, 00:32:22.079 "disable_auto_failback": false, 00:32:22.079 "generate_uuids": false, 00:32:22.079 "transport_tos": 0, 00:32:22.079 "nvme_error_stat": false, 00:32:22.079 "rdma_srq_size": 0, 00:32:22.079 "io_path_stat": false, 00:32:22.079 "allow_accel_sequence": false, 00:32:22.079 "rdma_max_cq_size": 0, 00:32:22.079 "rdma_cm_event_timeout_ms": 0, 00:32:22.079 "dhchap_digests": [ 00:32:22.079 "sha256", 00:32:22.079 "sha384", 00:32:22.079 "sha512" 00:32:22.079 ], 00:32:22.079 "dhchap_dhgroups": [ 00:32:22.079 "null", 00:32:22.079 "ffdhe2048", 00:32:22.079 "ffdhe3072", 00:32:22.079 "ffdhe4096", 00:32:22.079 "ffdhe6144", 00:32:22.079 "ffdhe8192" 00:32:22.079 ] 00:32:22.079 } 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "method": "bdev_nvme_attach_controller", 00:32:22.079 "params": { 00:32:22.079 "name": "nvme0", 00:32:22.079 "trtype": "TCP", 00:32:22.079 "adrfam": "IPv4", 00:32:22.079 "traddr": "127.0.0.1", 00:32:22.079 "trsvcid": "4420", 00:32:22.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.079 "prchk_reftag": false, 00:32:22.079 "prchk_guard": false, 00:32:22.079 "ctrlr_loss_timeout_sec": 0, 00:32:22.079 "reconnect_delay_sec": 0, 00:32:22.079 "fast_io_fail_timeout_sec": 0, 00:32:22.079 "psk": "key0", 00:32:22.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.079 "hdgst": false, 00:32:22.079 "ddgst": false 00:32:22.079 } 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "method": "bdev_nvme_set_hotplug", 00:32:22.079 "params": { 00:32:22.079 "period_us": 100000, 00:32:22.079 "enable": false 00:32:22.079 } 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "method": "bdev_wait_for_examine" 00:32:22.079 } 00:32:22.079 ] 00:32:22.079 }, 00:32:22.079 { 00:32:22.079 "subsystem": "nbd", 00:32:22.079 "config": [] 00:32:22.079 } 00:32:22.079 ] 00:32:22.079 }' 00:32:22.079 10:47:25 keyring_file -- keyring/file.sh@114 -- # killprocess 4098097 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 4098097 ']' 00:32:22.079 10:47:25 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 4098097 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4098097 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4098097' 00:32:22.079 killing process with pid 4098097 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@969 -- # kill 4098097 00:32:22.079 Received shutdown signal, test time was about 1.000000 seconds 00:32:22.079 00:32:22.079 Latency(us) 00:32:22.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.079 =================================================================================================================== 00:32:22.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:22.079 10:47:25 keyring_file -- common/autotest_common.sh@974 -- # wait 4098097 00:32:22.339 10:47:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=4099553 00:32:22.339 10:47:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 4099553 /var/tmp/bperf.sock 00:32:22.339 10:47:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 4099553 ']' 00:32:22.339 10:47:25 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:22.339 10:47:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:22.339 10:47:25 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:22.339 10:47:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:22.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
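Everything in this test is driven through bdevperf's private JSON-RPC socket rather than the target's default RPC socket: each bperf_cmd above is just scripts/rpc.py -s /var/tmp/bperf.sock <method>. For the second half of the test the live configuration is captured with save_config and replayed into a fresh bdevperf instance via -c /dev/fd/63; the long JSON dump that follows is that configuration being echoed in. A minimal sketch of the pattern (paths relative to the SPDK tree; the process substitution stands in for the fd-63 plumbing the harness actually uses):

# Capture the running configuration (keys, bdev settings, attached controller) as JSON
config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)

# Relaunch bdevperf with that configuration; -z keeps it idle waiting for RPCs,
# -r points it at the same private RPC socket used above
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &

# Once the socket is up, the same rpc.py calls work against the new process
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length   # expect 2, as checked below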
00:32:22.339 10:47:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:22.339 10:47:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:22.339 "subsystems": [ 00:32:22.339 { 00:32:22.339 "subsystem": "keyring", 00:32:22.339 "config": [ 00:32:22.339 { 00:32:22.339 "method": "keyring_file_add_key", 00:32:22.339 "params": { 00:32:22.339 "name": "key0", 00:32:22.339 "path": "/tmp/tmp.sQHETzk19o" 00:32:22.339 } 00:32:22.339 }, 00:32:22.339 { 00:32:22.339 "method": "keyring_file_add_key", 00:32:22.339 "params": { 00:32:22.339 "name": "key1", 00:32:22.339 "path": "/tmp/tmp.Q1Gv707LQP" 00:32:22.339 } 00:32:22.339 } 00:32:22.339 ] 00:32:22.339 }, 00:32:22.339 { 00:32:22.339 "subsystem": "iobuf", 00:32:22.339 "config": [ 00:32:22.339 { 00:32:22.339 "method": "iobuf_set_options", 00:32:22.339 "params": { 00:32:22.339 "small_pool_count": 8192, 00:32:22.339 "large_pool_count": 1024, 00:32:22.339 "small_bufsize": 8192, 00:32:22.339 "large_bufsize": 135168 00:32:22.339 } 00:32:22.339 } 00:32:22.339 ] 00:32:22.339 }, 00:32:22.339 { 00:32:22.339 "subsystem": "sock", 00:32:22.339 "config": [ 00:32:22.339 { 00:32:22.339 "method": "sock_set_default_impl", 00:32:22.339 "params": { 00:32:22.339 "impl_name": "posix" 00:32:22.339 } 00:32:22.339 }, 00:32:22.339 { 00:32:22.339 "method": "sock_impl_set_options", 00:32:22.339 "params": { 00:32:22.339 "impl_name": "ssl", 00:32:22.339 "recv_buf_size": 4096, 00:32:22.339 "send_buf_size": 4096, 00:32:22.339 "enable_recv_pipe": true, 00:32:22.339 "enable_quickack": false, 00:32:22.339 "enable_placement_id": 0, 00:32:22.339 "enable_zerocopy_send_server": true, 00:32:22.339 "enable_zerocopy_send_client": false, 00:32:22.339 "zerocopy_threshold": 0, 00:32:22.339 "tls_version": 0, 00:32:22.339 "enable_ktls": false 00:32:22.339 } 00:32:22.339 }, 00:32:22.340 { 00:32:22.340 "method": "sock_impl_set_options", 00:32:22.340 "params": { 00:32:22.340 "impl_name": "posix", 00:32:22.340 "recv_buf_size": 2097152, 00:32:22.340 "send_buf_size": 2097152, 00:32:22.340 "enable_recv_pipe": true, 00:32:22.340 "enable_quickack": false, 00:32:22.340 "enable_placement_id": 0, 00:32:22.340 "enable_zerocopy_send_server": true, 00:32:22.340 "enable_zerocopy_send_client": false, 00:32:22.340 "zerocopy_threshold": 0, 00:32:22.340 "tls_version": 0, 00:32:22.340 "enable_ktls": false 00:32:22.340 } 00:32:22.340 } 00:32:22.340 ] 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "subsystem": "vmd", 00:32:22.340 "config": [] 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "subsystem": "accel", 00:32:22.340 "config": [ 00:32:22.340 { 00:32:22.340 "method": "accel_set_options", 00:32:22.340 "params": { 00:32:22.340 "small_cache_size": 128, 00:32:22.340 "large_cache_size": 16, 00:32:22.340 "task_count": 2048, 00:32:22.340 "sequence_count": 2048, 00:32:22.340 "buf_count": 2048 00:32:22.340 } 00:32:22.340 } 00:32:22.340 ] 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "subsystem": "bdev", 00:32:22.340 "config": [ 00:32:22.340 { 00:32:22.340 "method": "bdev_set_options", 00:32:22.340 "params": { 00:32:22.340 "bdev_io_pool_size": 65535, 00:32:22.340 "bdev_io_cache_size": 256, 00:32:22.340 "bdev_auto_examine": true, 00:32:22.340 "iobuf_small_cache_size": 128, 00:32:22.340 "iobuf_large_cache_size": 16 00:32:22.340 } 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "method": "bdev_raid_set_options", 00:32:22.340 "params": { 00:32:22.340 "process_window_size_kb": 1024, 00:32:22.340 "process_max_bandwidth_mb_sec": 0 00:32:22.340 } 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "method": 
"bdev_iscsi_set_options", 00:32:22.340 "params": { 00:32:22.340 "timeout_sec": 30 00:32:22.340 } 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "method": "bdev_nvme_set_options", 00:32:22.340 "params": { 00:32:22.340 "action_on_timeout": "none", 00:32:22.340 "timeout_us": 0, 00:32:22.340 "timeout_admin_us": 0, 00:32:22.340 "keep_alive_timeout_ms": 10000, 00:32:22.340 "arbitration_burst": 0, 00:32:22.340 "low_priority_weight": 0, 00:32:22.340 "medium_priority_weight": 0, 00:32:22.340 "high_priority_weight": 0, 00:32:22.340 "nvme_adminq_poll_period_us": 10000, 00:32:22.340 "nvme_ioq_poll_period_us": 0, 00:32:22.340 "io_queue_requests": 512, 00:32:22.340 "delay_cmd_submit": true, 00:32:22.340 "transport_retry_count": 4, 00:32:22.340 "bdev_retry_count": 3, 00:32:22.340 "transport_ack_timeout": 0, 00:32:22.340 "ctrlr_loss_timeout_sec": 0, 00:32:22.340 "reconnect_delay_sec": 0, 00:32:22.340 "fast_io_fail_timeout_sec": 0, 00:32:22.340 "disable_auto_failback": false, 00:32:22.340 "generate_uuids": false, 00:32:22.340 "transport_tos": 0, 00:32:22.340 "nvme_error_stat": false, 00:32:22.340 "rdma_srq_size": 0, 00:32:22.340 "io_path_stat": false, 00:32:22.340 "allow_accel_sequence": false, 00:32:22.340 "rdma_max_cq_size": 0, 00:32:22.340 "rdma_cm_event_timeout_ms": 0, 00:32:22.340 "dhchap_digests": [ 00:32:22.340 "sha256", 00:32:22.340 "sha384", 00:32:22.340 "sha512" 00:32:22.340 ], 00:32:22.340 "dhchap_dhgroups": [ 00:32:22.340 "null", 00:32:22.340 "ffdhe2048", 00:32:22.340 "ffdhe3072", 00:32:22.340 "ffdhe4096", 00:32:22.340 "ffdhe6144", 00:32:22.340 "ffdhe8192" 00:32:22.340 ] 00:32:22.340 } 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "method": "bdev_nvme_attach_controller", 00:32:22.340 "params": { 00:32:22.340 "name": "nvme0", 00:32:22.340 "trtype": "TCP", 00:32:22.340 "adrfam": "IPv4", 00:32:22.340 "traddr": "127.0.0.1", 00:32:22.340 "trsvcid": "4420", 00:32:22.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.340 "prchk_reftag": false, 00:32:22.340 "prchk_guard": false, 00:32:22.340 "ctrlr_loss_timeout_sec": 0, 00:32:22.340 "reconnect_delay_sec": 0, 00:32:22.340 "fast_io_fail_timeout_sec": 0, 00:32:22.340 "psk": "key0", 00:32:22.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.340 "hdgst": false, 00:32:22.340 "ddgst": false 00:32:22.340 } 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "method": "bdev_nvme_set_hotplug", 00:32:22.340 "params": { 00:32:22.340 "period_us": 100000, 00:32:22.340 "enable": false 00:32:22.340 } 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "method": "bdev_wait_for_examine" 00:32:22.340 } 00:32:22.340 ] 00:32:22.340 }, 00:32:22.340 { 00:32:22.340 "subsystem": "nbd", 00:32:22.340 "config": [] 00:32:22.340 } 00:32:22.340 ] 00:32:22.340 }' 00:32:22.340 10:47:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:22.340 [2024-07-25 10:47:25.835979] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
00:32:22.340 [2024-07-25 10:47:25.836033] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099553 ] 00:32:22.340 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.340 [2024-07-25 10:47:25.907584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.340 [2024-07-25 10:47:25.973379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.599 [2024-07-25 10:47:26.131245] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:23.166 10:47:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.166 10:47:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:23.166 10:47:26 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:23.166 10:47:26 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:23.166 10:47:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.166 10:47:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:23.166 10:47:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:23.166 10:47:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.166 10:47:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:23.166 10:47:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.166 10:47:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.166 10:47:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.424 10:47:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:23.424 10:47:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:23.425 10:47:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:23.425 10:47:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.425 10:47:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.425 10:47:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:23.425 10:47:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.683 10:47:27 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:23.684 10:47:27 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:23.684 10:47:27 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:23.684 10:47:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:23.684 10:47:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:23.684 10:47:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:23.684 10:47:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.sQHETzk19o /tmp/tmp.Q1Gv707LQP 00:32:23.684 10:47:27 keyring_file -- keyring/file.sh@20 -- # killprocess 4099553 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 4099553 ']' 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 4099553 00:32:23.684 10:47:27 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4099553 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4099553' 00:32:23.684 killing process with pid 4099553 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@969 -- # kill 4099553 00:32:23.684 Received shutdown signal, test time was about 1.000000 seconds 00:32:23.684 00:32:23.684 Latency(us) 00:32:23.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.684 =================================================================================================================== 00:32:23.684 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:23.684 10:47:27 keyring_file -- common/autotest_common.sh@974 -- # wait 4099553 00:32:23.942 10:47:27 keyring_file -- keyring/file.sh@21 -- # killprocess 4097958 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 4097958 ']' 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 4097958 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4097958 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4097958' 00:32:23.942 killing process with pid 4097958 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@969 -- # kill 4097958 00:32:23.942 [2024-07-25 10:47:27.616130] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:23.942 10:47:27 keyring_file -- common/autotest_common.sh@974 -- # wait 4097958 00:32:24.679 00:32:24.679 real 0m11.849s 00:32:24.679 user 0m27.381s 00:32:24.679 sys 0m3.245s 00:32:24.679 10:47:27 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:24.679 10:47:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:24.679 ************************************ 00:32:24.679 END TEST keyring_file 00:32:24.679 ************************************ 00:32:24.679 10:47:27 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:24.679 10:47:27 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:24.679 10:47:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:24.679 10:47:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.679 10:47:27 -- common/autotest_common.sh@10 -- # set +x 00:32:24.679 ************************************ 00:32:24.679 START TEST keyring_linux 00:32:24.680 ************************************ 00:32:24.680 10:47:28 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:24.680 * Looking for test 
storage... 00:32:24.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.680 10:47:28 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.680 10:47:28 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.680 10:47:28 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.680 10:47:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.680 10:47:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.680 10:47:28 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.680 10:47:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:24.680 10:47:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:24.680 10:47:28 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:24.680 /tmp/:spdk-test:key0 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:24.680 10:47:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:24.680 10:47:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:24.680 /tmp/:spdk-test:key1 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4100164 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:24.680 10:47:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4100164 00:32:24.680 10:47:28 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 4100164 ']' 00:32:24.680 10:47:28 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.681 10:47:28 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:24.681 10:47:28 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.681 10:47:28 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:24.681 10:47:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:24.681 [2024-07-25 10:47:28.271391] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
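The prep_key calls traced here wrap each raw hex key in the NVMe TLS PSK interchange format before writing it to a 0600 file: the NVMeTLSkey-1 prefix, a two-hex-digit PSK hash identifier (00 here, meaning no hash), then base64 of the key bytes with their CRC-32 appended. A rough, self-contained re-creation is sketched below; the little-endian CRC byte order is an assumption about the untraced python snippet in nvmf/common.sh and may differ in detail.

format_interchange_psk() {   # rough re-creation of the helper traced above
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
# Append a CRC-32 of the key bytes (assumed little endian), then base64 the result
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
PY
}

format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0   # keyring_file rejected a 0660 file earlier in the run

The value keyctl adds just below has exactly this shape: 48 base64 characters covering the 32 key bytes plus a 4-byte CRC.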
00:32:24.681 [2024-07-25 10:47:28.271447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100164 ] 00:32:24.681 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.681 [2024-07-25 10:47:28.341758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.940 [2024-07-25 10:47:28.415960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:25.507 10:47:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.507 [2024-07-25 10:47:29.061509] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.507 null0 00:32:25.507 [2024-07-25 10:47:29.093566] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:25.507 [2024-07-25 10:47:29.093953] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.507 10:47:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:25.507 833039993 00:32:25.507 10:47:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:25.507 236858789 00:32:25.507 10:47:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4100191 00:32:25.507 10:47:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4100191 /var/tmp/bperf.sock 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 4100191 ']' 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.507 10:47:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.507 10:47:29 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:25.507 [2024-07-25 10:47:29.164921] Starting SPDK v24.09-pre git sha1 6f18624d4 / DPDK 24.03.0 initialization... 
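From here the flow differs from keyring_file only in where the key material lives: the PSK files are loaded into the kernel session keyring with keyctl, and SPDK is pointed at them by key name once the linux keyring plugin has been enabled over the bperf socket. Condensed into the underlying commands (paths relative to the SPDK tree; bdevperf is assumed to have been started with -z --wait-for-rpc as above):

keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s   # prints the key serial (833039993 in this run)
keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Cleanup: look the serial up again in the session keyring and unlink it
sn=$(keyctl search @s user :spdk-test:key0)
keyctl unlink "$sn"

Where keyring_file cleaned up with keyring_file_remove_key, the same role here is played by keyctl unlink, which is why the trace reports "1 links removed" after each key.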
00:32:25.507 [2024-07-25 10:47:29.164968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100191 ] 00:32:25.507 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.766 [2024-07-25 10:47:29.234020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.766 [2024-07-25 10:47:29.303247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.334 10:47:29 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:26.334 10:47:29 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:26.334 10:47:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:26.334 10:47:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:26.592 10:47:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:26.592 10:47:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:26.851 10:47:30 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:26.851 10:47:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:26.851 [2024-07-25 10:47:30.511382] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:27.110 nvme0n1 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:27.110 10:47:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:27.110 10:47:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:27.110 10:47:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.110 10:47:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:27.110 10:47:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.368 10:47:30 keyring_linux -- keyring/linux.sh@25 -- # sn=833039993 00:32:27.368 10:47:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:27.368 10:47:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:27.368 10:47:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 833039993 == \8\3\3\0\3\9\9\9\3 ]] 00:32:27.368 10:47:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 833039993 00:32:27.368 10:47:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:27.368 10:47:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:27.368 Running I/O for 1 seconds... 00:32:28.747 00:32:28.747 Latency(us) 00:32:28.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.747 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:28.747 nvme0n1 : 1.01 13167.29 51.43 0.00 0.00 9683.57 3670.02 14050.92 00:32:28.747 =================================================================================================================== 00:32:28.747 Total : 13167.29 51.43 0.00 0.00 9683.57 3670.02 14050.92 00:32:28.747 0 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:28.747 10:47:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:28.747 10:47:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:28.747 10:47:32 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:28.747 10:47:32 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:28.747 10:47:32 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:28.747 10:47:32 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:28.747 10:47:32 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.747 10:47:32 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:28.747 10:47:32 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.747 10:47:32 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:28.747 10:47:32 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.007 [2024-07-25 10:47:32.585914] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:29.007 [2024-07-25 10:47:32.586436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50750 (107): Transport endpoint is not connected 00:32:29.007 [2024-07-25 10:47:32.587431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50750 (9): Bad file descriptor 00:32:29.007 [2024-07-25 10:47:32.588432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:29.007 [2024-07-25 10:47:32.588443] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:29.007 [2024-07-25 10:47:32.588452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:29.007 request: 00:32:29.007 { 00:32:29.007 "name": "nvme0", 00:32:29.007 "trtype": "tcp", 00:32:29.007 "traddr": "127.0.0.1", 00:32:29.007 "adrfam": "ipv4", 00:32:29.007 "trsvcid": "4420", 00:32:29.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.007 "prchk_reftag": false, 00:32:29.007 "prchk_guard": false, 00:32:29.007 "hdgst": false, 00:32:29.007 "ddgst": false, 00:32:29.007 "psk": ":spdk-test:key1", 00:32:29.007 "method": "bdev_nvme_attach_controller", 00:32:29.007 "req_id": 1 00:32:29.007 } 00:32:29.007 Got JSON-RPC error response 00:32:29.007 response: 00:32:29.007 { 00:32:29.007 "code": -5, 00:32:29.007 "message": "Input/output error" 00:32:29.007 } 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@33 -- # sn=833039993 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 833039993 00:32:29.007 1 links removed 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@33 -- # sn=236858789 00:32:29.007 10:47:32 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 236858789 00:32:29.007 1 links removed 00:32:29.007 10:47:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4100191 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 4100191 ']' 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 4100191 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4100191 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4100191' 00:32:29.007 killing process with pid 4100191 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@969 -- # kill 4100191 00:32:29.007 Received shutdown signal, test time was about 1.000000 seconds 00:32:29.007 00:32:29.007 Latency(us) 00:32:29.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.007 =================================================================================================================== 00:32:29.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.007 10:47:32 keyring_linux -- common/autotest_common.sh@974 -- # wait 4100191 00:32:29.267 10:47:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4100164 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 4100164 ']' 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 4100164 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4100164 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4100164' 00:32:29.267 killing process with pid 4100164 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@969 -- # kill 4100164 00:32:29.267 10:47:32 keyring_linux -- common/autotest_common.sh@974 -- # wait 4100164 00:32:29.526 00:32:29.526 real 0m5.202s 00:32:29.526 user 0m8.885s 00:32:29.526 sys 0m1.664s 00:32:29.526 10:47:33 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:29.526 10:47:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:29.526 ************************************ 00:32:29.526 END TEST keyring_linux 00:32:29.526 ************************************ 00:32:29.786 10:47:33 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@347 -- 
# '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:32:29.786 10:47:33 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:29.786 10:47:33 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:29.786 10:47:33 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:29.786 10:47:33 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:32:29.786 10:47:33 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:32:29.786 10:47:33 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:32:29.786 10:47:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:29.786 10:47:33 -- common/autotest_common.sh@10 -- # set +x 00:32:29.786 10:47:33 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:32:29.786 10:47:33 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:29.786 10:47:33 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:29.786 10:47:33 -- common/autotest_common.sh@10 -- # set +x 00:32:36.462 INFO: APP EXITING 00:32:36.462 INFO: killing all VMs 00:32:36.462 INFO: killing vhost app 00:32:36.462 INFO: EXIT DONE 00:32:38.370 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:38.370 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:38.370 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:38.370 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:38.628 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:38.886 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:38.886 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:38.886 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:38.886 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:32:42.174 Cleaning 00:32:42.174 Removing: /var/run/dpdk/spdk0/config 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:42.174 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:42.174 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:42.174 Removing: /var/run/dpdk/spdk1/config 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:42.174 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:42.174 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:42.174 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:42.174 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:42.174 Removing: /var/run/dpdk/spdk2/config 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:42.174 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:42.174 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:42.174 Removing: /var/run/dpdk/spdk3/config 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:42.174 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:42.174 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:42.174 Removing: /var/run/dpdk/spdk4/config 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:42.174 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:42.175 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:42.175 Removing: /dev/shm/bdev_svc_trace.1 00:32:42.175 Removing: /dev/shm/nvmf_trace.0 00:32:42.175 Removing: /dev/shm/spdk_tgt_trace.pid3702429 00:32:42.175 Removing: /var/run/dpdk/spdk0 00:32:42.175 Removing: /var/run/dpdk/spdk1 00:32:42.175 Removing: /var/run/dpdk/spdk2 00:32:42.175 Removing: /var/run/dpdk/spdk3 00:32:42.175 Removing: /var/run/dpdk/spdk4 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3699975 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3701223 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3702429 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3703132 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3703958 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3704228 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3705331 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3705366 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3705720 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3707984 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3709424 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3709738 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3710059 
00:32:42.175 Removing: /var/run/dpdk/spdk_pid3710394 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3710720 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3711001 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3711234 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3711479 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3712382 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3715328 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3715620 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3715930 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3716178 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3716741 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3716752 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3717337 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3717575 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3717875 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3717977 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3718183 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3718447 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3718818 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3719099 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3719426 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3723356 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3727857 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3738344 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3738906 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3743423 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3743758 00:32:42.175 Removing: /var/run/dpdk/spdk_pid3748223 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3754900 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3757701 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3768786 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3778221 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3779931 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3780927 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3798614 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3803434 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3849228 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3855329 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3861545 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3867819 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3867873 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3868857 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3869655 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3870506 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3871136 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3871234 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3871494 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3871515 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3871517 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3872483 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3873351 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3874155 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3874742 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3874882 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3875150 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3876347 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3877459 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3886079 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3911660 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3916273 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3918032 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3919880 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3920152 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3920412 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3920527 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3921227 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3923119 
00:32:42.435 Removing: /var/run/dpdk/spdk_pid3923991 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3924554 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3927468 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3928076 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3928864 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3933177 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3943618 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3947713 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3953947 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3955415 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3956919 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3961504 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3965754 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3973708 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3973711 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3979034 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3979290 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3979456 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3979829 00:32:42.435 Removing: /var/run/dpdk/spdk_pid3979842 00:32:42.693 Removing: /var/run/dpdk/spdk_pid3984600 00:32:42.693 Removing: /var/run/dpdk/spdk_pid3985253 00:32:42.693 Removing: /var/run/dpdk/spdk_pid3989878 00:32:42.693 Removing: /var/run/dpdk/spdk_pid3992769 00:32:42.693 Removing: /var/run/dpdk/spdk_pid3998331 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4003852 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4012704 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4019997 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4020038 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4039571 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4040329 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4040906 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4041583 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4042548 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4043099 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4043749 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4044432 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4048951 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4049215 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4055286 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4055590 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4057858 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4066072 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4066174 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4071917 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4073924 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4075960 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4077119 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4079228 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4080419 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4089693 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4090216 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4090744 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4093170 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4093614 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4094068 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4097958 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4098097 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4099553 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4100164 00:32:42.693 Removing: /var/run/dpdk/spdk_pid4100191 00:32:42.693 Clean 00:32:42.951 10:47:46 -- common/autotest_common.sh@1451 -- # return 0 00:32:42.951 10:47:46 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:32:42.951 10:47:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:42.951 10:47:46 -- common/autotest_common.sh@10 -- # set +x 00:32:42.951 
10:47:46 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:32:42.951 10:47:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:42.951 10:47:46 -- common/autotest_common.sh@10 -- # set +x 00:32:42.951 10:47:46 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:42.951 10:47:46 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:42.951 10:47:46 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:42.951 10:47:46 -- spdk/autotest.sh@395 -- # hash lcov 00:32:42.951 10:47:46 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:42.951 10:47:46 -- spdk/autotest.sh@397 -- # hostname 00:32:42.951 10:47:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:43.208 geninfo: WARNING: invalid characters removed from testname! 00:33:05.142 10:48:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:05.711 10:48:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:07.614 10:48:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:08.991 10:48:12 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:10.959 10:48:14 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:12.337 10:48:16 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:14.245 10:48:17 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:14.245 10:48:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.245 10:48:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:14.245 10:48:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.245 10:48:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.245 10:48:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.245 10:48:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.245 10:48:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.245 10:48:17 -- paths/export.sh@5 -- $ export PATH 00:33:14.245 10:48:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.245 10:48:17 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:14.245 10:48:17 -- common/autobuild_common.sh@447 -- $ date +%s 00:33:14.245 10:48:17 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721897297.XXXXXX 00:33:14.245 10:48:17 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721897297.aXp3Mp 00:33:14.245 10:48:17 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:33:14.245 10:48:17 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:33:14.245 10:48:17 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:14.245 10:48:17 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:14.245 10:48:17 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:14.245 10:48:17 -- common/autobuild_common.sh@463 -- $ get_config_params 00:33:14.245 10:48:17 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:33:14.245 10:48:17 -- common/autotest_common.sh@10 -- $ set +x 00:33:14.245 10:48:17 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:14.245 10:48:17 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:33:14.245 10:48:17 -- pm/common@17 -- $ local monitor 00:33:14.245 10:48:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.245 10:48:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.245 10:48:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.245 10:48:17 -- pm/common@21 -- $ date +%s 00:33:14.245 10:48:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.245 10:48:17 -- pm/common@21 -- $ date +%s 00:33:14.245 10:48:17 -- pm/common@21 -- $ date +%s 00:33:14.245 10:48:17 -- pm/common@25 -- $ sleep 1 00:33:14.245 10:48:17 -- pm/common@21 -- $ date +%s 00:33:14.245 10:48:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897297 00:33:14.245 10:48:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897297 00:33:14.245 10:48:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897297 00:33:14.245 10:48:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721897297 00:33:14.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897297_collect-cpu-temp.pm.log 00:33:14.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897297_collect-vmstat.pm.log 00:33:14.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897297_collect-cpu-load.pm.log 00:33:14.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721897297_collect-bmc-pm.bmc.pm.log 00:33:15.181 10:48:18 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:33:15.181 10:48:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:33:15.181 10:48:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.181 10:48:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:15.181 10:48:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:15.181 10:48:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:15.181 10:48:18 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:15.181 10:48:18 -- 
common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:15.181 10:48:18 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:15.181 10:48:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:15.181 10:48:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:15.181 10:48:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:15.181 10:48:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:15.181 10:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.181 10:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:15.181 10:48:18 -- pm/common@44 -- $ pid=4111561 00:33:15.181 10:48:18 -- pm/common@50 -- $ kill -TERM 4111561 00:33:15.181 10:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.181 10:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:15.181 10:48:18 -- pm/common@44 -- $ pid=4111563 00:33:15.181 10:48:18 -- pm/common@50 -- $ kill -TERM 4111563 00:33:15.181 10:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.181 10:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:15.181 10:48:18 -- pm/common@44 -- $ pid=4111565 00:33:15.181 10:48:18 -- pm/common@50 -- $ kill -TERM 4111565 00:33:15.181 10:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.181 10:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:15.181 10:48:18 -- pm/common@44 -- $ pid=4111592 00:33:15.181 10:48:18 -- pm/common@50 -- $ sudo -E kill -TERM 4111592 00:33:15.440 + [[ -n 3591209 ]] 00:33:15.440 + sudo kill 3591209 00:33:15.450 [Pipeline] } 00:33:15.467 [Pipeline] // stage 00:33:15.472 [Pipeline] } 00:33:15.489 [Pipeline] // timeout 00:33:15.494 [Pipeline] } 00:33:15.511 [Pipeline] // catchError 00:33:15.516 [Pipeline] } 00:33:15.533 [Pipeline] // wrap 00:33:15.554 [Pipeline] } 00:33:15.569 [Pipeline] // catchError 00:33:15.576 [Pipeline] stage 00:33:15.578 [Pipeline] { (Epilogue) 00:33:15.592 [Pipeline] catchError 00:33:15.593 [Pipeline] { 00:33:15.607 [Pipeline] echo 00:33:15.609 Cleanup processes 00:33:15.615 [Pipeline] sh 00:33:15.901 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.901 4111669 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:15.901 4112008 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.913 [Pipeline] sh 00:33:16.196 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:16.196 ++ grep -v 'sudo pgrep' 00:33:16.196 ++ awk '{print $1}' 00:33:16.196 + sudo kill -9 4111669 00:33:16.205 [Pipeline] sh 00:33:16.483 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:16.483 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:33:21.755 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:33:25.104 [Pipeline] sh 00:33:25.390 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:25.390 Artifacts sizes are good 00:33:25.405 [Pipeline] 
archiveArtifacts 00:33:25.412 Archiving artifacts 00:33:25.559 [Pipeline] sh 00:33:25.845 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:25.859 [Pipeline] cleanWs 00:33:25.869 [WS-CLEANUP] Deleting project workspace... 00:33:25.869 [WS-CLEANUP] Deferred wipeout is used... 00:33:25.876 [WS-CLEANUP] done 00:33:25.878 [Pipeline] } 00:33:25.899 [Pipeline] // catchError 00:33:25.911 [Pipeline] sh 00:33:26.191 + logger -p user.info -t JENKINS-CI 00:33:26.200 [Pipeline] } 00:33:26.214 [Pipeline] // stage 00:33:26.220 [Pipeline] } 00:33:26.236 [Pipeline] // node 00:33:26.242 [Pipeline] End of Pipeline 00:33:26.279 Finished: SUCCESS
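
For reference, the coverage post-processing captured in the log above reduces to the lcov sequence below. This is a minimal sketch rather than the job's actual autotest script: $WS stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest root, cov_base.info is assumed to have been captured earlier in the job before the tests ran, and the exclude list is limited to the patterns that appear in this log.

    #!/usr/bin/env bash
    # Sketch of the lcov post-processing steps recorded in the log above.
    # Assumptions: $WS approximates the Jenkins workspace root and
    # cov_base.info already exists from a pre-test capture.
    set -euo pipefail

    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # assumed workspace root
    SPDK=$WS/spdk
    OUT=$SPDK/../output

    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q)

    # 1. Capture the counters produced by the test run, tagged with the host name.
    lcov "${LCOV_OPTS[@]}" -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

    # 2. Merge the pre-test baseline with the post-test capture.
    lcov "${LCOV_OPTS[@]}" -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"

    # 3. Strip sources that should not count toward SPDK coverage.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${LCOV_OPTS[@]}" -r "$OUT/cov_total.info" "$pattern" \
             -o "$OUT/cov_total.info"
    done

    # 4. Drop the intermediate captures once the filtered tracefile exists.
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

Chaining the -r passes in place on cov_total.info mirrors how the job applies its removals one pattern at a time, leaving a single filtered tracefile behind for later report generation.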